Clojure web development – state of the art – part 2

This is part 2 of my “Clojure web development” series. You can discuss the first part in this Reddit thread. After reading the comments I want to explain two assumptions I had while writing this series:

  • Keep things easy to understand for people from outside the Clojure world, especially Java devs. That’s why I use REST/JSON instead of Transit, and component as the “dependency injection” implementation, since it can easily be explained as a Spring framework equivalent. The same goes for Om, which is a bit verbose but, in my opinion, easier to understand for a start and has wider adoption than the other React wrappers.

  • Keep things easy to bootstrap on a developer machine. This is a hands-on walkthrough and all the individual steps have been committed to GitHub. That’s why I use MongoDB, which may not be the best choice for scaling your application to millions of users, but is perfect for bootstrapping: no schema, no tables, just insert data and start working. I highly recommend Honza Kral’s polyglot persistence talk, in which he encourages starting simple and optimizing for developer happiness at the start of a project.

In the previous post we bootstrapped a basic web application serving REST data with a (static, for now) ClojureScript frontend, all fully reloadable thanks to reloaded.repl and figwheel. You can find the final working version in this branch.

Today we’re going to display a contact list stored in MongoDB. I assume you have MongoDB installed; if not, it’s trivial with Docker.

Serving contact list from database

OK, let’s start. In the backend we need to add some dependencies to project.clj:

:dependencies
   ...
   [org.danielsz/system "0.1.9"]
   [com.novemberain/monger "2.0.0"]]

monger is an idiomatic Clojure wrapper for the Mongo Java driver, and system is a nice collection of components for various datastores, including Mongo (a bit like Spring Data in the Spring world).
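If you haven’t used monger before, here is a minimal REPL sketch of its API (database and collection names are just illustrative), assuming the monger 2.x connect/get-db style:

(require '[monger.core :as mg]
         '[monger.collection :as mc])

;; connect to the default localhost:27017 and pick a database
(def conn (mg/connect))
(def db   (mg/get-db conn "contacts"))

;; insert a document and read everything back as Clojure maps
(mc/insert db "contacts" {:firstname "Al" :lastname "Pacino"})
(mc/find-maps db "contacts")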

In order to interact with a data store I like to use the concept of an abstract repository. It hides the implementation details from the caller and allows switching to another store in the future. So let’s create an abstract interface (in Clojure: a protocol) in component/repo.clj:

(ns modern-clj-web.component.repo)

(defprotocol ContactRepository
  (find-all [this]))

We need the this parameter so that the Clojure runtime can dispatch to the correct implementation of the repository. The Mongo implementation with Monger is really simple:

(ns modern-clj-web.component.repo
   (:require [monger.collection :as mc]
             [monger.json]))
...

(defrecord ContactRepoComponent [mongo]
  ContactRepository
  (find-all [this]
    (mc/find-maps (:db mongo) "contacts")))

(defn new-contact-repo-component []
  (->ContactRepoComponent {}))

Things to notice here:

  • mc/find-maps just returns all records from the collection as Clojure maps
  • ContactRepoComponent gets injected with the mongo component created by the system library, which makes the Mongo database available under the :db key
  • Since this component is stateless, we don’t need to implement the component/Lifecycle protocol, but it can still be wired into the system like a typical lifecycle-aware component (see the sketch after this list)
  • Requiring monger.json adds JSON serialization support for Mongo types (e.g. ObjectId)
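For comparison, if the repository did hold state (say, an open connection or a cache), a lifecycle-aware version could look roughly like this (just a sketch, not part of the project; it assumes [com.stuartsierra.component :as component] is required):

(defrecord StatefulContactRepoComponent [mongo conn]
  component/Lifecycle
  (start [this]
    ;; acquire resources here, e.g. keep a reference to the database
    (assoc this :conn (:db mongo)))
  (stop [this]
    ;; release resources here
    (assoc this :conn nil))

  ContactRepository
  (find-all [this]
    (mc/find-maps conn "contacts")))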

OK, it’s now time to use our new component in endpoint/example.clj:

(:require
  ...
  [modern-clj-web.component.repo :as r])

(defn example-endpoint [{repo :contact-repo}]
 (routes
...
   (GET "/contacts" [] (response (r/find-all repo)))

The {repo :contact-repo} notation (destructuring) automatically binds the :contact-repo key from the system map to the repo value. So we need to assign our component to that key in system.clj:

(:require
  ...
  [modern-clj-web.component.repo :refer [new-contact-repo-component]]
  [system.components.mongo :refer [new-mongo-db]])

 (-> (component/system-map
          :app  (handler-component (:app config))
          :http (jetty-server (:http config))
          :example (endpoint-component example-endpoint)
          :mongo (new-mongo-db (:mongo-uri config))
          :contact-repo (new-contact-repo-component))
         (component/system-using
          {:http [:app]
           :app  [:example]
           :example [:contact-repo]
           :contact-repo [:mongo]}))))

In short: we use system’s new-mongo-db to create the Mongo component, make it a dependency of the repository, which is itself a dependency of the example endpoint.

And finally we need to configure the :mongo-uri property in config.clj:

(def environ
  {:http {:port (some-> env :port Integer.)}
   :mongo-uri "mongodb://localhost:27017/contacts"})

To check that it works, restart the REPL, type (go) again and make a GET request to http://localhost:3000/contacts:

curl http://localhost:3000/contacts
[]

OK, so we got an empty list since we don’t have any data in the Mongo database yet. Let’s add some with the mongo console:

mongo localhost:27017/contacts
MongoDB shell version: 2.4.9
connecting to: localhost:27017/contacts
>  db.contacts.insert({firstname: "Al", lastname: "Pacino"});
>  db.contacts.insert({firstname: "Johnny", lastname: "Depp"});

And finally our endpoint should return these two records:

curl http://localhost:3000/contacts
[{"lastname":"Pacino","firstname":"Al","_id":"56158345fd2dabeddfb18799"},{"lastname":"Depp","firstname":"Johnny","_id":"56158355fd2dabeddfb1879a"}]

Sweet! Again - in case of any problems, check this commit.

Getting contacts from ClojureScript

In this step we’ll fetch the contacts with an AJAX call from our ClojureScript frontend. As usual, we need a few dependencies in project.clj for a start:

:dependencies 
     ...
       [org.clojure/clojurescript "1.7.48"]
       [org.clojure/core.async "0.1.346.0-17112a-alpha"]
       [cljs-http "0.1.37"]

ClojureScript should already be available via figwheel, but it’s always better to require a specific version explicitly. cljs-http is an HTTP client for ClojureScript, and core.async provides facilities for asynchronous communication based on the CSP model, which is especially useful in ClojureScript. Let’s see how it works in practice.

To make an AJAX call we need to call methods from cljs-http.client, so let’s add this in core.cljs:

(ns ^:figwheel-always modern-clj-web.core
  (:require [cljs-http.client :as http]))

(println (http/get "/contacts"))

You should see #object[cljs.core.async.impl.channels.ManyToManyChannel]. What madness is that???

This is where core.async enters the picture. The most common ways to deal with asynchronous network calls in JavaScript are callbacks and promises. The core.async way is channels. It makes your code look more like a sequence of synchronous calls and is easier to reason about. So the http/get function returns a channel on which the result is published when the response arrives. In order to receive that message we need to read from the channel using the <! function. Since this read blocks, we also need to wrap the call in the go macro, just like in the Go language. So the correct way of getting contacts looks like this:

(:require
...
  [cljs.core.async :refer [<! >! chan]])
(:require-macros
  [cljs.core.async.macros :refer [go]]) ;; go is a macro, so it needs require-macros

(go
  (let [response (<! (http/get "/contacts"))]
    (println (:body response))))

Adding Om component

Dealing with frontend code without introducing any structure can quickly become a real nightmare. In late 2015 we have basically two major JS frameworks on the field: Angular and React. ClojureScript paradigms (functional programming, immutable data structures) fit really nicely into the React philosophy. In short, a React application is composed of components taking data as input and rendering HTML as output. The output is not the real DOM but a so-called virtual DOM, which helps calculate the diff from the current view to the updated one.

Among the many React wrappers for ClojureScript I like using Om with om-tools to reduce some of the verbosity. Let’s add them to our project.clj:

:dependencies 
     ...
   [org.omcljs/om "0.9.0"]
   [prismatic/om-tools "0.3.12"]

To render a “hello world” component we need to add some code in core.cljs:

(:require 
   ...
            [om.core :as om]
            [om-tools.core :refer-macros [defcomponent]]
            [om-tools.dom :as dom :include-macros true]))

(def app-state (atom {:message "hello from om"}))

(defcomponent app [data owner]
  (render [_]
    (dom/div (:message data))))

(om/root app app-state
         {:target (.getElementById js/document "main")})

What’s going on here? The main concept of Om is keeping the whole application state in one global atom, which is the Clojure way of managing state. So we pass this app-state map (wrapped in an atom) as a parameter to om/root, which mounts components into the real DOM (the <div id="main"/> from index.html). The app component just displays the :message value, so you should see “hello from om” rendered. If you have figwheel running, you can change the message value and it should be updated instantly.
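Since om/root watches the atom, you can also poke the state directly; evaluating something like the following in the figwheel REPL (just an illustration) should re-render the component immediately:

(swap! app-state assoc :message "hello again, this time from the REPL")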

And finally let’s render our contacts with Om:

(defn get-contacts []
  (go
    (let [response (<! (http/get "/contacts"))]
      (:body response))))

(defcomponent contact-comp [contact _]
  (render [_]
    (dom/li (str (:firstname contact) " " (:lastname contact)))))

(defcomponent app [data _]
  (will-mount [_]
    (go
      (let [contacts (<! (get-contacts))]
        (om/update! data :contacts contacts))))
  (render [_]
    (dom/div
      (dom/h2 (:message data))
      (dom/ul
        (om/build-all contact-comp (:contacts data))))))

So contact-comp just renders a single contact. We use om/build-all to render all contacts kept under the :contacts key in the global state. And the trickiest part: we use the will-mount lifecycle method to fetch contacts from the server when the app component is about to be mounted into the DOM.

Again, this commit contains a working version in case of any problems.

And if you liked Om, I highly recommend the official tutorials and the Zero to Om series.

Clojure web development – state of the art

I’ve been getting familiar with Clojure for more than a year now, and the more I dive into it, the more it becomes the language for me. Once you defeat the “parentheses fear”, everything else just makes the difference: tooling, community, good engineering practices. So it’s now time for me to convince others. In this post I’ll walk through a simple web application from scratch to show the key tools and libraries used to develop with Clojure in late 2015.

Note for Clojurians: This material is rather elementary and may be useful if you already know Clojure a bit but have never built anything bigger than a hello world application.

Note for Java developers: This material shows how to replace Spring, Angular, grunt, live-reload with a bunch of Clojure tools and libraries and a bit of code.

The repo with final code and individual steps is here.

Bootstrap

I think everyone agrees that component is the industry standard for managing the lifecycle of Clojure applications. If you are a Java developer, you may think of it as a Spring (DI) replacement: you declare dependencies between “components”, which are resolved on “system” startup. So you just say “my component needs a repository/database pool” and the component library “injects” it for you.
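As a sketch of the idea (the names here are purely illustrative, not from the project), a component declares what it needs and the library assocs the started dependency in before start is called:

(ns example.system
  (:require [com.stuartsierra.component :as component]))

(defrecord DbPool [config pool]
  component/Lifecycle
  (start [this]
    ;; open the real connection pool here
    (assoc this :pool {:connected-to (:uri config)}))
  (stop [this]
    (assoc this :pool nil)))

(defrecord UserService [db-pool]
  component/Lifecycle
  (start [this] this)
  (stop [this] this))

(def system
  (-> (component/system-map
       :db-pool      (map->DbPool {:config {:uri "mongodb://localhost/app"}})
       :user-service (map->UserService {}))
      (component/system-using
       {:user-service [:db-pool]})))   ;; "my component needs a db pool"

;; (component/start system) starts :db-pool first and injects it into :user-service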

To keep things simple I like to start with duct web app template. It’s a nice starter component application following the 12-factor philosophy. So let’s start with it:

lein new duct clojure-web-app +example

The +example parameter tells duct to create an example endpoint with HTTP routes - this will be helpful. To finish bootstrapping, run lein setup inside the clojure-web-app directory.

OK, let’s dive into the code. The component- and injection-related code lives in the system.clj file:

(defn new-system [config]
  (let [config (meta-merge base-config config)]
    (-> (component/system-map
         :app  (handler-component (:app config))
         :http (jetty-server (:http config))
         :example (endpoint-component example-endpoint))
        (component/system-using
         {:http [:app]
          :app  [:example]
          :example []}))))

In the first section you instantiate the components; their dependencies are declared and resolved in the second section. So in this example, the “http” component (the server) requires “app” (the application abstraction), which in turn is injected with “example” (the actual routes). If your component needs others, you can simply get them by name (more precisely: by Clojure keyword).

To start the system you fire up a REPL - an interactive environment running within the context of your application:

lein repl

After seeing the prompt, type (go). The application should start, and you can visit http://localhost:3000 to see the example page.

A huge benefit of the component approach is that you get a fully reloadable application. When you change literally anything - configuration, endpoints, implementation - you can just type (reset) in the REPL and your application is up to date with the code. It’s a feature of the language; no JRebel or Spring Loaded needed.

Adding REST endpoint

OK, in the next step let’s add a basic REST endpoint returning JSON. We need to add two dependencies in the project.clj file:

:dependencies
 ...
  [ring/ring-json "0.3.1"]
  [cheshire "5.1.1"]

Ring-json adds JSON support to your routes (in Ring this is called middleware) and cheshire is a Clojure JSON parser (like Jackson in Java). Modifying project dependencies is one of the few tasks that require restarting the REPL, so hit CTRL-C and type lein repl again.

To configure JSON middleware we have to add wrap-json-body and wrap-json-response just before wrap-defaults in system.clj:

(:require 
 ...
 [ring.middleware.json :refer [wrap-json-body wrap-json-response]])

(def base-config
   {:app {:middleware [[wrap-not-found :not-found]
                      [wrap-json-body {:keywords? true}]
                      [wrap-json-response]
                      [wrap-defaults :defaults]]

And finally, in endpoint/example.clj we add a route with a JSON response:

(:require 
 ...
 [ring.util.response :refer [response]]))

(defn example-endpoint [config]
  (routes
    (GET "/hello" [] (response {:hello "world"}))
    ...

Reload the app with (reset) in the REPL and test the new route with curl:

curl -v http://localhost:3000/hello

< HTTP/1.1 200 OK
< Date: Tue, 15 Sep 2015 21:17:37 GMT
< Content-Type: application/json; charset=utf-8
< Set-Cookie: ring-session=37c337fb-6bbc-4e65-a060-1997718d03e0;Path=/;HttpOnly
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< Content-Length: 151
* Server Jetty(9.2.10.v20150310) is not blacklisted
< Server: Jetty(9.2.10.v20150310)
<
* Connection #0 to host localhost left intact
{"hello": "world"}

It works! In case of any problems you can find a working version in this commit.

Adding frontend with figwheel

Coding the backend in Clojure is great, but what about the frontend? As you may already know, Clojure can be compiled not only to JVM bytecode but also to JavaScript. This may sound familiar if you have used e.g. CoffeeScript, but the ClojureScript philosophy is not just to provide some syntactic sugar: it improves your development cycle with great tooling and fully interactive development. Let’s see how to achieve that.

The best way to introduce ClojureScript to a project is figwheel. First let’s add the figwheel plugin and configuration to project.clj:

:plugins
   ...
   [lein-figwheel "0.3.9"]

And cljsbuild configuration:

:cljsbuild
    {:builds [{:id "dev"
               :source-paths ["src-cljs"]
               :figwheel true
               :compiler {:main       "clojure-web-app.core"
                          :asset-path "js/out"
                          :output-to  "resources/public/js/clojure-web-app.js"
                          :output-dir "resources/public/js/out"}}]}

In short, this tells the ClojureScript compiler to take sources from src-cljs, with figwheel support, and put the resulting JavaScript into the resources/public/js/clojure-web-app.js file. So we need to include this file in a simple HTML page:

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <div id="main">
  </div>
  <script src="js/clojure-web-app.js" type="text/javascript"></script>
</body>
</html>

To serve this static file we need to change some defaults and add a corresponding route. In system.clj change api-defaults to site-defaults, both in the require section and in the base-config function. In example.clj add the following route (io here is clojure.java.io):

(GET "/" [] (io/resource "public/index.html"))

Again, (reset) in the REPL window should reload everything.

But where is our ClojureScript source file? Let’s create a core.cljs file in the src-cljs/clojure-web-app directory:

(ns ^:figwheel-always clojure-web-app.core)

(enable-console-print!)

(println "hello from clojurescript")

Open another terminal and run lein figwheel. It should compile the ClojureScript and print ‘Prompt will show when figwheel connects to your application’. Open http://localhost:3000. The figwheel window should then show the prompt:

To quit, type: :cljs/quit
cljs.user=>

Type (js/alert "hello"). Boom! If everything worked you should see an alert in your browser. Now open the developer console in your browser; you should see hello from clojurescript printed there. Change it in core.cljs to (println "figwheel rocks") and save the file. Without reloading the page you should see the updated message. Figwheel rocks! Again, in case of any problems, refer to this commit.

UPDATE: In the latest duct release there is a +cljs option which makes it possible to use reloaded.repl and figwheel in a single REPL. Highly recommended!

In the next post I’ll show how to fetch data from MongoDB, serve it with REST to the browser and write ReactJs/Om components to render it. Stay tuned!

Micro services on the JVM part 1 – Clojure

Microservices could be the buzzword of 2014 for me. A few months ago I was curious enough to try the Dropwizard framework as a separate backend, but I didn’t get the whole idea yet. Then I watched the mind-blowing “Micro-Services Architecture” talk by Fred George. Also, the Spring 4.0 release notes mention microservices as an important rising trend. After 10 years of having SOA in mind but still developing monoliths, it’s a really tempting idea to decouple systems into a set of independently developed and deployed RESTful services.

So when I decided to write a simple API for my DevRates.com website, instead of adding code to the existing codebase I wanted to build a separate tiny app. But what’s the best stack for microservices? In this series of posts I’ll try to compare various JVM technology stacks for this approach.

Here is my list of must-have features for the stack:

  • declarative REST support (no manual URL parsing)
  • native JSON support (bidirectional JSON-object mapping)
  • single “fat” jar packaging, no web container needed
  • fast development feedback loop (eg. runtime code reloading)
  • Swagger and Metrics integration

In this post I’ll try to cover Clojure with Ring and Compojure.

TL;DR

You can find all the covered concepts in the following GitHub examples:

Basic setup

There is an excellent tutorial by Zaiste showing how to kickstart a REST app with Compojure; just follow those few simple steps (the rest of the post assumes compojure-rest as the app name).

My sample route from handler.clj:

(defroutes app-routes
  (GET "/messages/:name" [name] {:body {:message (str "Hello World" " " name)}})
  (route/resources "/")
  (route/not-found "Not Found"))

Fat jar

In a simple setup, a Compojure app is run through the lein ring plugin. To enable running it as a standalone command-line app, you have to write a main method which starts the Jetty server.

project.clj

 :dependencies ...
         [ring/ring-jetty-adapter "1.2.0"]
         ..
 :main compojure-rest.handler

handler.clj

(ns compojure-rest.handler
 ...
 (:require ...
   [ring.adapter.jetty :refer (run-jetty)])
   (:gen-class))
...
(defn -main [& args]
  (run-jetty app {:port 3000 :join? false }))

To build a single “fat” jar just run lein uberjar, and then java -jar target/compojure-rest-0.1.0-SNAPSHOT-standalone.jar runs the app.

Swagger

The nice thing about Compojure is that you can easily expose Swagger documentation by using the swag library. There are some conflicts between swag and the lein ring plugin, so just look at compojure-swag for a working example.

Here is a typical snippet from handler.clj:

(set-base "http://localhost:3000")

(defroutes- messages {:path "/messages" :description "Messages management"}
  (GET- "/messages/:name" [^:string name] {:nickname "getMessages" :summary "Get message"}
      {:body {:message (str "Hello World" " " name)}})
  (route/resources "/")
  (route/not-found "Not Found"))

So swag introduces defroutes-, GET- and POST-, which take additional metadata as parameters to generate the Swagger docs. If you’re a little scared by the ^:string fragment, check the metadata section of the Clojure manual. A Swagger-compatible definition should be available at http://localhost:3000/api-docs.json after running the app.

Metrics

To expose basic metrics of your REST API calls, just use the Ring-compatible metrics-clojure-ring library.

project.clj

 :dependencies ...
         [metrics-clojure-ring "1.0.1"]
         ...

handler.clj

(ns compojure-rest.handler
 ...
 (:require ...
    [metrics.ring.expose :refer [expose-metrics-as-json]]
    [metrics.ring.instrument :refer [instrument]]))
...
(def app (expose-metrics-as-json (instrument app) "/stats/"))

After generating some load with e.g. wrk, you can check the collected stats by visiting http://localhost:3000/stats/:

ring.requests.rate.GET: {
    type: "meter",
    rates: {
        1: 189.5836593065824,
        5: 39.21602480726734,
        15: 13.146759983907245
    }
}

Some random Clojure thoughts

  • The best newbie guide to Clojure is Kyle Kingsbury’s “Clojure from the ground up” series.
  • Leiningen is probably the best build tool for the JVM. Easy to install, fast, simple, no XML - it just does it right. And the “new” project templates are what Maven has been missing for ages (anyone using archetypes?).
  • Light Table is great! I’m really impressed with the fast feedback loop of just ctrl+entering expressions.
  • Also, live reloading with lein ring server works fine. Just change the code and see the changes immediately. Rapid!
  • Unlike other recently popular languages, Clojure has no killer-framework. Rails, Play/Akka, Grails/Gradle - all of these are key parts of Ruby, Scala and Groovy ecosystems. What about Clojure? A collection of small (micro?) libraries doing one thing well and working great together - just like Unix commands.
  • It may be true that Clojure is not good for large projects. With all the complex constructs and no visibility control, it could be hard to maintain a large codebase. But that’s not a first-class problem in a microservices world.

Resources

Integration testing custom validation constraints in Jersey 2

I recently joined a team trying to turn a monolithic legacy system into a set of RESTful services in Java. They decided to use the latest 2.x version of Jersey as the REST container, which wouldn’t have been my first choice since I’m not a big fan of the JSR-* specs. But now I must admit that JAX-RS 2.x does things right: it requires almost zero boilerplate code, supports auto-discovery of features and prefers convention over configuration like other modern frameworks. Since the spec is still young, it’s hard to find good tutorials and kick-off projects with working code. I created the jersey2-starter project on GitHub which can be used as a starting point for your own production-ready RESTful service. In this post I’d like to cover how to implement and integration-test your own validation constraints for REST resources.

Custom constraints

One of the issues that bothers me when coding REST in Java is littering your class model with annotations. Suppose you want to build a simple Todo list REST service; using Jackson, validation and Spring Data, you can easily end up with this as your entity class:

@Document
public class Todo {
    private Long id;
    @NotNull
    private String description;
    @NotNull
    private Boolean completed;
    @NotNull
    private DateTime dueDate;

    @JsonCreator
    public Todo(@JsonProperty("description") String description, @JsonProperty("dueDate") DateTime dueDate) {
        this.description = description;
        this.dueDate = dueDate;
        this.completed = false;
    }
    // getters and setters
}

Your domain model is now effectively blurred by messy annotations almost everywhere. Let’s see what we can do about the validation constraints (the @NotNulls). Some may say that you could introduce a DTO layer with its own validation rules, but for me that conflicts with pure REST API design, which states that you operate on resources that should map to your domain classes. On the other hand, what does it even mean that a Todo object is valid? When you create a Todo you should provide a description and a due date, but what about when you’re updating? You should be able to change any of the description, the due date (postponing) and the completion flag (marking as done), but you should provide at least one of these for a valid modification. So my idea is to introduce custom validation constraints - different ones for creation and modification:

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForCreation.Validator.class)
public @interface ValidForCreation {
    //...
    class Validator implements ConstraintValidator<ValidForCreation, Todo> {
    //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && todo.getDescription() != null
                && todo.getDueDate() != null;
        }
    }
}

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForModification.Validator.class)
public @interface ValidForModification {
    //...
    class Validator implements ConstraintValidator<ValidForModification, Todo> {
    //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && (todo.getDescription() != null || todo.getDueDate() != null || todo.isCompleted() != null);
        }
    }
}

And now you can move validation annotations to the definition of a REST endpoint:

@POST
@Consumes(APPLICATION_JSON)
public Response create(@ValidForCreation Todo todo) {...}

@PUT
@Consumes(APPLICATION_JSON)
public Response update(@ValidForModification Todo todo) {...}

And now you can remove those NotNulls from your model.

Integration testing

There are in general two approaches to integration testing:

  • the test runs in a separate JVM from the app, which is deployed to some other integration environment
  • the test deploys the application programmatically in the setup block.

Both of these have their pros and cons, but for small enough services I personally prefer the second approach. It’s much easier to set up and only one JVM is started, which makes debugging really easy. You can use a generic framework like Arquillian to start your application in a container environment, but I prefer simple solutions and just use embedded Jetty. To make the test setup 100% production-equivalent, I create a full Jetty WebAppContext and have to resolve all runtime dependencies for Jersey auto-discovery to work. This can be achieved simply with the Maven resolver from ShrinkWrap - an Arquillian subproject:

    WebAppContext webAppContext = new WebAppContext();
    webAppContext.setResourceBase("src/main/webapp");
    webAppContext.setContextPath("/");
    File[] mavenLibs = Maven.resolver().loadPomFromFile("pom.xml")
                .importCompileAndRuntimeDependencies()
                .resolve().withTransitivity().asFile();
    for (File file: mavenLibs) {
        webAppContext.getMetaData().addWebInfJar(new FileResource(file.toURI()));
    }
    webAppContext.getMetaData().addContainerResource(new FileResource(new File("./target/classes").toURI()));

    webAppContext.setConfigurations(new Configuration[] {
        new AnnotationConfiguration(),
        new WebXmlConfiguration(),
        new WebInfConfiguration()
    });
    server.setHandler(webAppContext);

(this Stackoverflow thread inspired me a lot here)

Now it’s time for the last part of the post: parametrizing our integration tests. Since we want to test validation constraints, there are many edge cases to check (and to bring your code coverage close to 100%). Writing one test per case would be a bad idea. Among the many solutions for JUnit, I’m most convinced by JUnitParams from the Pragmatists team. It’s really simple and has a nice concept of a jQuery-like helper for creating providers. Here is my test code (I’m also using the builder pattern here to create various kinds of Todos):

@Test
@Parameters(method = "provideInvalidTodosForCreation")
public void shouldRejectInvalidTodoWhenCreate(Todo todo) {
    Response response = createTarget().request().post(Entity.json(todo));

    assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
}

private static Object[] provideInvalidTodosForCreation() {
    return $(
        new TodoBuilder().withDescription("test").build(),
        new TodoBuilder().withDueDate(DateTime.now()).build(),
        new TodoBuilder().withId(123L).build(),
        new TodoBuilder().build()
    );
}

OK, enough of reading, feel free to clone the project and start writing your REST services!

Vert.x on Raspberry Pi

Lightweight JVM web server

Some people say that using Java on the Raspberry Pi is a stupid idea. Sure, the JVM uses a lot of system resources, but if you can watch HD movies on the RPi, why not run a tiny Java app? And I do mean really tiny - no Tomcat or any other application server involved, no WARs or deployment stuff. That’s why I came up with the idea of using vert.x - a relatively new framework for writing modern web applications. In this post I’ll try to show the lessons learned from developing a sample app with vert.x and then running it on the Raspberry Pi.

Working with vert.x on RPi

Vert.x is node.js on steroids for the JVM. In addition to all the libraries available on the JVM, it offers you a lot more:

  • polyglot - write your application in any language that runs on the JVM (or mix a few)
  • really simple programming model
  • inter-app communication via event bus and shared data
  • ease of scalability and concurrency handling

If you want a gentle start in the world of vert.x, I definitely suggest reading the manual on the project website.

Installation

First of all, install the Java 8 early access build for ARM from the Oracle website. Vert.x itself requires only Java 7, but the Java 8 build has real hard-float support.

The easiest way to manage vert.x installation is to use GVM:

curl -s get.gvmtool.net | bash
gvm install vertx

Now you should be able to run vertx from the command line.

Development process

In my approach I develop the app on my laptop and want a straight path to running the code on the RPi. Since I love IntelliJ IDEA 12, I put my sources into the common Maven/Gradle structure and use the Verticle interface to get the full benefits of my IDE: code completion, refactoring and testing support. I use Groovy as the language for my verticles, but I want the sources to be compiled on my laptop instead of the RPi, which slightly improves the startup time of the app.

The examples in the vert.x distribution are not really development-friendly, but I found a great example of a Groovy starter project on GitHub which introduces two big ideas: keeping the app configuration in a single JSON file, and a runVertx Gradle task to build and run the app in a single step. I also use the web-server vert.x module to serve the static web files of the app frontend. So I have SockJS and REST verticles in the backend and a JavaScript frontend in a single project root.

Here is the final layout explained:

src/main/groovy - verticles code
    App.groovy - app launcher
src/main/test - verticles tests
web - frontend stuff
build.gradle - gradle script with runVertx task
run.sh - script for running app on RPI (no compilation)

And a typical development process looks like this:

  • write verticles code (*)
  • write unit tests for verticles (*)
  • run the app on laptop and modify the frontend
  • copy binaries to RPi and run the app from command line

(*) change the order of these two for TDD ;)

The only thing I’m struggling with is the lack of runtime reloading of verticles. Maybe using JRebel could help there, but I don’t have a licence (yet).

You can see my example project here.

Benchmark

I also did some benchmarking of vert.x app throughput on the RPi. Sure, you cannot expect a single RPi to handle a lot of transactions per second, so the test was just for fun. In the test I served a static CSS file (Bootstrap) and invoked a really simple REST resource (the example details/Joe/123 mapping from RouteMatcher). I used siege with 2 and 200 consumers and compared the results between vert.x (1 instance), node.js with Express and nginx (just for the static CSS). I installed node.js 0.8.18 from binaries found here.

Here are the results:

  • It is nice that serving static files is almost as fast as with nginx. There is a benchmark showing that the Monkey web server is a bit faster, but in my environment I wasn’t able to achieve more than 45 req/sec with nginx, so ~40 req/sec with vert.x is a really good result.
  • Vert.x seems to be about 2-2.5 times faster than node.js with Express. This is similar to Tim Fox’s benchmark with both node.js and vert.x in a single instance. (BTW: I have no idea how to get 30k req/sec on a workstation as in Tim’s test.)
  • The benchmark shows that the RPi is indeed a toy PC - on my workstation it’s really easy to achieve thousands of requests per second. So forget about handling hundreds of requests per second on a single RPi instance.

(a bit) Speeding up startup time

The thing that really puts node.js in front is startup time. Java is slow at both JVM startup and classloading. My first application deployed on the RPi took about 30 seconds to start, while on my laptop it took about 2-3 seconds. It is possible to cut it down to 18 seconds; here are a few hints:

  • don’t compile sources directly on the RPi - build a jar on your workstation. In my setup I run the server from Gradle on my laptop, but I also have a bash script to run it on the Raspberry which passes vert.x a -cp parameter pointing to the built jar.
  • don’t run a SockJS server when it’s not needed. It takes about 5 seconds to start. You can disable it by setting bridge to false in the mod-web configuration.
  • you can write your own mod-web in just a few lines of code. You will save some milliseconds, and this setup also allows you to pass custom RESTful routes to a RouteMatcher. Look at this example:
    def rm = new RouteMatcher()
    rm.get('/details/:user/:id', { HttpServerRequest req ->
        req.response.end "User: ${req.params()['user']} ID: ${req.params()['id']}"
    } as Handler<HttpServerRequest>)

    // Catch / - serve the index page
    rm.get('/', { HttpServerRequest req ->
        req.response.sendFile("web/index.html")
    } as Handler<HttpServerRequest>)

    // Catch all - just send the file from web directory
    rm.getWithRegEx('.*', { HttpServerRequest req ->
        req.response.sendFile "web/${req.path}"
    } as Handler<HttpServerRequest>)    

To debug the most time-consuming steps of app startup, you should configure log4j logging. Vert.x outputs some info about deploying verticles, and you can also add your own logging messages. Just copy log4j.jar to the vert.x lib directory and add the -Dorg.vertx.logger-delegate-factory-class-name=org.vertx.java.core.logging.impl.Log4jLogDelegateFactory parameter to the vert.x startup script.

Sample project

You can check out these concepts by cloning the top monitor GitHub project. It is a Dashing-inspired web app showing the results of calling the top command via lucid jQuery knobs. Every 2 seconds, stats from the top call are pushed through the event bus (and internally over SockJS) to the frontend.

The project itself is nothing smashing; you can just check the structure and the tests, where I mock the ARM-specific top command results. Enjoy!

CasperJS for Java developers

Why CasperJS

Being a Java developer is kinda hard these days. Java may not be dead yet, but keeping in sync with all the hipster JavaScript frameworks can make us feel a bit outside the playground. It’s even hard to list all the JavaScript frameworks and their latest releases on one website.

In my current project we are using AngularJS. It’s a nice abstraction of the MV* pattern in the frontend layer of a web application (we use Grails underneath). Here is a nice article with an 8-point win list of the Angular way of handling AJAX calls and updating the view. So it’s not only a shiny new framework but a true helper in keeping your code clean and neat.

But there is another area where you can put a helpful JS framework in place of a plain-old-Java one: functional tests. Especially when you are dealing with a single-page app with lots of asynchronous REST/JSON communication.

Selenium and Geb

In a Java/JVM project the typical approach is to use Selenium with some wrapper like Geb. So you start your project, set up your CI functional-testing pipeline and… after a month of coding your tests stop working and stop being maintainable. The frameworks themselves are not bad, but the typical setup is so heavy and has so many points of failure that keeping it working in a real-life project is really hard.

Here is my list of common myths about Selenium:

  • It allows you to record test scripts via a handy GUI - maybe for some static request/response sites. In modern web applications with asynchronous REST/JSON communication your tests must contain a lot of “waitFor” statements, and you cannot automate where these should be included.
  • It allows you to test your web app against many browsers - don’t try to automate IE tests! You have to open your app in IE manually to see how it actually behaves!
  • It integrates well with continuous integration servers like Jenkins - you have to set up Selenium Grid on a server with X installed to run tests on Chrome or Firefox, plus a Windows server for IE. And the headless HtmlUnit driver lacks a lot of JS support.

So I decided to try something different and introduce a bit of JavaScript tooling in our project by using CasperJS.

Introduction

CasperJS is a simple but powerful navigation scripting & testing utility for PhantomJS - a scriptable headless WebKit (the rendering engine used by Safari and Chrome). In short, CasperJS allows you to navigate and make assertions about web pages as if they’d been rendered in Google Chrome. That is enough for me to automate the functional tests of my application.

If you want a gentle introduction to the world of CasperJS I suggest you read:

  • the official website, especially the installation guide and API
  • the introductory article from CasperJS creator Nicolas Perriault
  • Highlevel testing with CasperJS by Kevin van Zonneveld
  • the grails-angular-scaffolding plugin by Rob Fletcher, which has some working CasperJS tests

Full example

I run my test suite via the following script:

casperjs test --direct --log-level=debug --testhost=localhost:8080 --includes=test/casper/includes/casper-angular.coffee,test/casper/includes/pages.coffee test/casper/specs/

casper-angular.coffee

casper.test.on "fail", (failure) ->
    casper.capture(screenshot)

testhost   = casper.cli.get "testhost"
screenshot = 'test-fail.png'

casper
    .log("Using testhost: #{testhost}", "info")
    .log("Using screenshot: #{screenshot}", "info")

casper.waitUntilVisible = (selector, message, callback) ->
    @waitFor ->
        @visible selector
    , callback, (timeout) ->
        @log("Selector [#{selector}] not visible, failing")
        withParentSelector selector, (parent) ->
            casper.log("Output of parent selector [#{parent}]")
            casper.debugHTML(parent)
        @echo message, "RED_BAR"
        @capture(screenshot)
        @test.fail(f("Wait timeout occured (%dms)", timeout))

withParentSelector = (selector, callback) ->
    if selector.lastIndexOf(" ") > 0
       parent = selector[0..selector.lastIndexOf(" ")-1]
       callback(parent)

Sample pages.coffee:

x = require('casper').selectXPath

class EditDocumentPage

    assertAt: ->
        casper.test.assertSelectorExists("div.customerAccountInfo", 'at EditDocumentPage')

    templatesTreeFirstCategory: 'ul.tree li label'
    templatesTreeFirstTemplate: 'ul.tree li a'
    closePreview: '.closePreview a'
    smallPreview: '.smallPreviewContent img'
    bigPreview: 'img.previewImage'
    confirmDelete: x("//div[@class='modal-footer']/a[1]")

casper.editDocument = new EditDocumentPage()

And a test script:

testhost = casper.cli.get("testhost") or 'localhost:8080'

casper.start "http://#{testhost}/app", ->
    @test.assertHttpStatus 302
    @test.assertUrlMatch /\/fakeLogin/, 'auto login'
    @test.assert @visible('input#Create'), 'mock login button'
    @click 'input#Create'

casper.then ->
    @test.assertUrlMatch /document#\/edit/, 'new document'
    @editDocument.assertAt()
    @waitUntilVisible @editDocument.templatesTreeFirstCategory, 'template categories not visible', ->
        @click @editDocument.templatesTreeFirstCategory
        @waitUntilVisible @editDocument.templatesTreeFirstTemplate, 'template not visible', ->
            @click @editDocument.templatesTreeFirstTemplate

casper.then ->
    @waitUntilVisible @editDocument.smallPreview, 'small preview not visible', ->
        # could be dblclick / whatever
        @mouseEvent('click', @editDocument.smallPreview)

casper.then ->
    @waitUntilVisible @editDocument.bigPreview, 'big preview should be visible', ->
        @test.assertEvalEquals ->
            $('.pageCounter').text()
        , '1/1', 'page counter should be visible'
        @click @editDocument.closePreview

casper.then ->
    @click 'button.cancel'
    @waitUntilVisible '.modal-footer', 'delete confirmation not visible', ->
        @click @editDocument.confirmDelete

casper.run ->
    @test.done()

Here is a list of CasperJS features/caveats used here:

  • Using CoffeeScript is a huge win for keeping your test code neat
  • When using the casper test command, beware that the logging setup differs from the articles above. You can pass --direct --log-level=debug from the command line for best results. Logging is essential here, since Phantom often exits without any error and you do want to know what just happened.
  • Extract your helper code into separate files and include them by using the --includes switch.
  • When passing the server URL as a command-line switch, remember that in CoffeeScript variables are not visible between multiple source files (unless you share them via the window object)
  • It’s good to override the standard waitUntilVisible with one that captures a screenshot and makes a proper log statement. In my version I also look for a parent selector and debugHTML its content - great for debugging what is actually rendered by the browser.
  • Selenium and Geb have a nice concept of Page Objects - abstract models of the pages rendered by your application. Using CoffeeScript you can write your own classes, bind selectors to properties and use them in your test scripts. Assigning the objects to the casper instance ends up with quite nice syntax like @editDocument.assertAt().
  • There is some issue with the CSS :first and :last selectors. I couldn’t get them working (but maybe I’m doing something wrong?). But in CasperJS you can also use XPath selectors, which are fine for matching the n-th child of some element (x("//div[@class='modal-footer']/a[1]")).
    Update: :first and :last are not CSS3 selectors but jQuery ones. Here is a list of CSS3 selectors, all of which are supported by CasperJS. So you can use :nth-child(1) in this case. Thanks Andy and Nicolas for the comments!

Working with CasperJS can cost you a few hours of stalling, but after getting things working you have a new, cool tool in your toolbox!

Using WsLite in practice

TL;DR

There is an example working GitHub project which covers unit testing and request/response logging when using WsLite.

Why Groovy WsLite ?

I’m a huge fan of the Groovy WsLite project for calling SOAP web services. Yes, in the real world you have to deal with those - big companies have huge amounts of “legacy” code and are crazy about homogeneous architecture: only SOAP, Java, Oracle, AIX…

But I’ve also never been comfortable with the XFire/CXF approach of generating web service client code. I wrote a bit about other possibilities in this post. With JAXB you can also experience some freaky classloading errors, as Tomek described on his blog. In a large commercial project the “less code the better” principle matters. And the code generated from XSD can look kinda ugly - especially for more complicated structures like sequences, choices, anys etc.

Using WsLite with native Groovy concepts like XmlSlurper can be a great choice. But since it’s a dynamic approach, you have to be really careful: write good unit tests and log your requests. Below are a few of my hints for using WsLite in practice.

Unit testing

Suppose you have some invocation of WsLite SOAPClient (original WsLite example):

def getMothersDay(long _year) {
    def response = client.send(SOAPAction: action) {
       body {
           GetMothersDay('xmlns':'http://www.27seconds.com/Holidays/US/Dates/') {
              year(_year)
           }
       }
    }
    response.GetMothersDayResponse.GetMothersDayResult.text()
}

What should the unit test look like? My suggestion is to mock SOAPClient and write a simple helper to check that the built XML is correct. Here’s an example using the great Spock Framework:

void setup() {
   client = Mock(SOAPClient)
   service.client = client
}

def "should pass year to GetMothersDay and return date"() {
  given:
      def year = 2013
  when:
      def date = service.getMothersDay(year)
  then:
      1 * client.send(_, _) >> { Map params, Closure requestBuilder ->
            Document doc = buildAndParseXml(requestBuilder)
            assertXpathEvaluatesTo("$year", '//ns:GetMothersDay/ns:year', doc)
            return mockResponse(Responses.mothersDay)
      }
      date == "2013-05-12T00:00:00"
}

This uses a really cool feature of Spock - even when you mock an invocation with the “any” mark (_), you are still able to get the actual arguments. So we can build the XML that would be passed to SOAPClient's send method and check that specific XPaths are correct:

void setup() {
    engine = XMLUnit.newXpathEngine()
    engine.setNamespaceContext(new SimpleNamespaceContext(namespaces()))
}

protected Document buildAndParseXml(Closure xmlBuilder) {
    def writer = new StringWriter()
    def builder = new MarkupBuilder(writer)
    builder.xml(xmlBuilder)
    return XMLUnit.buildControlDocument(writer.toString())
}

protected void assertXpathEvaluatesTo(String expectedValue,
                                      String xpathExpression, Document doc) throws XpathException {
    Assert.assertEquals(expectedValue,
            engine.evaluate(xpathExpression, doc))
}

protected Map namespaces() {
    return [ns: 'http://www.27seconds.com/Holidays/US/Dates/']
}

The XMLUnit library is used just for XpathEngine, but it is much more powerful for comparing XML documents. The NamespaceContext is needed to use correct prefixes (e.g. ns:GetMothersDay) in your Xpath expressions.

Finally, the mock returns a SOAPResponse instance filled with an envelope parsed from some constant XML:

protected SOAPResponse mockResponse(String resp) {
    def envelope = new XmlSlurper().parseText(resp)
    new SOAPResponse(envelope: envelope)
}

Request and response logging

WsLite itself doesn’t use any logging framework. We usually handle this by adding our own sendWithLogging method:

private SOAPResponse sendWithLogging(String action, Closure cl) {
    SOAPResponse response = client.send(SOAPAction: action, cl)
    log(response?.httpRequest, response?.httpResponse)
    return response
}

private void log(HTTPRequest request, HTTPResponse response) {
    log.debug("HTTPRequest $request with content:\n${request?.contentAsString}")
    log.debug("HTTPResponse $response with content:\n${response?.contentAsString}")
}

This logs the actual request and response sent through SOAPClient. But it only logs when the invocation is successful, and errors are much more interesting… So here is the withExceptionHandler method:

private SOAPResponse withExceptionHandler(Closure cl) {
    try {
        cl.call()
    } catch (SOAPFaultException soapEx) {
        log(soapEx.httpRequest, soapEx.httpResponse)
        def message = soapEx.hasFault() ? soapEx.fault.text() : soapEx.message
        throw new InfrastructureException(message)
    } catch (HTTPClientException httpEx) {
        log(httpEx.request, httpEx.response)
        throw new InfrastructureException(httpEx.message)
    }
}
def send(String action, Closure cl) {
    withExceptionHandler {
        sendWithLogging(action, cl)
    }
}

XmlSlurper gotchas

Working with XML documents with XmlSlurper is generally great fun, but in some cases it can introduce problems. A trivial example is parsing an id attribute containing a number into a Long value:

def id = Long.valueOf(edit.'@id' as String)

The Attribute class (which edit.'@id' evaluates to) can be converted to a String using the as operator, but converting it to a Long requires using valueOf.

The second example is a bit more complicated. Consider following XML fragment:

<edit id="3">
   <params>
      <param value="label1" name="label"/>
      <param value="2" name="param2"/>
   </params>
   <value>123</value>
</edit>
<edit id="6">
   <params>
      <param value="label2" name="label"/>
      <param value="2" name="param2"/>
   </params>
   <value>456</value>
</edit>

We want to find the id of the edit whose label is label1. The simplest solution seems to be:

def param = doc.edit.params.param.find { it['@value'] == 'label1' }
def edit = param.parent().parent()

But it doesn’t work! The parent method returns multiple edits, not only the one that is the parent of the given param.

Here’s the correct solution:

doc.edit.find { edit ->
    edit.params.param.find { it['@value'] == 'label1' }
}

Example

An example working project covering these hints can be found on GitHub.