Advisory Messages to the rescue

The most crucial part of software development is testing. It should assure us that our code is correct and works according to the given specs. There are many kinds of tests: unit, integration, functional. In general you should try to test the smallest possible subset of your code and be able to check the state of the objects after the test.

This seems like a rather easy task, but what if you have an end-to-end integration test to perform? In most cases asserting state in an integration test is hard, because multiple systems interoperate. Let’s focus on a specific situation.

What I needed to do the other day was write an integration test for a JMS-based system. The processing pipeline is simple:

  • fetch object from DB
  • process it
  • publish on JMS

Some other system (the X-system) polls JMS:

  • if a message is found
  • fetch it (the message disappears from the JMS queue)
  • do something with it

Looks simple, but since I didn’t have any sane access to the X-system, I wanted to be sure that my object was actually put into the queue. It was not acceptable to subscribe to the queue and fetch that object in my test – it would disrupt the flow of the whole process.

Fortunately I was using ActiveMQ, and since it offers a feature called advisory messages, I decided to use them.

What are advisory messages? They are a set of administrative messages generated on specific events: message consumption, message delivery, topic destruction, and many more. Each type of message is delivered to a separate topic prefixed with ActiveMQ.Advisory. Since generating such messages may add overhead in production systems, these features are turned off by default; you need to enable each specific type of advisory message for a specific JMS destination. You can do this with the following change to activemq.xml:

    <destinationPolicy>
      <policyMap>
        <policyEntries>
          <policyEntry queue="my/test/queue" advisoryForDelivery="true" advisoryForConsumed="true"/>
          <policyEntry topic=">" producerFlowControl="true" memoryLimit="1mb">
            <pendingSubscriberPolicy>
              <vmCursor />
            </pendingSubscriberPolicy>
          </policyEntry>
        </policyEntries>
      </policyMap>
    </destinationPolicy>
    

As you can see, I’ve specified which advisories I want enabled. The full list of available advisories can be found in the ActiveMQ documentation.

Since I wanted to read messages from that topic, I’ve added the following configuration to my Spring context – there is one destination bean for inserting messages and one bean for the advisory topic.

    <bean id="testQueue" class="org.apache.activemq.command.ActiveMQQueue" autowire="constructor">
        <constructor-arg value="my/test/queue" />
    </bean>
    
    <bean id="deliveredToTestQueueAdvisory" class="org.apache.activemq.command.ActiveMQTopic" autowire="constructor">
        <constructor-arg value="ActiveMQ.Advisory.MessageDelivered.Queue.my/test/queue" />
    </bean>
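
With both beans in place, the check boils down to subscribing to the advisory topic and then triggering the flow under test. Below is a minimal sketch of such a test using the plain JMS API (javax.jms) against the beans defined above; connectionFactory and processingPipeline are hypothetical names standing in for your broker connection and for the code that ends with the JMS publish:

    // Subscribe to the advisory topic BEFORE triggering the pipeline --
    // topics do not retain messages for subscribers that arrive late.
    Connection connection = connectionFactory.createConnection();
    connection.start();
    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    MessageConsumer advisoryConsumer = session.createConsumer(deliveredToTestQueueAdvisory);

    processingPipeline.run(); // hypothetical trigger: fetch from DB, process, publish on JMS

    // Wait up to 5 seconds for the MessageDelivered advisory; the original
    // message stays untouched on my/test/queue.
    Message advisory = advisoryConsumer.receive(5000);
    assertNotNull("expected a MessageDelivered advisory", advisory);

    connection.close();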
    

Thanks to this configuration I was able to check that my message was actually delivered to the queue. There was no need to worry about race conditions when consuming the message from the original queue – if the X-system had already read the message, I would be unable to determine whether it had ever been in JMS at all.

What’s not so nice about that:

  • advisory messages can be thought of as counters rather than debugging information
  • they don’t contain any data that would allow us to match an advisory message to the original message – though you could correlate by timestamp, as sketched below
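
A rough sketch of such timestamp-based correlation, reusing advisoryConsumer and the hypothetical processingPipeline from the sketch above; treat it as a heuristic, since broker and test clocks may drift:

    long publishedAt = System.currentTimeMillis();
    processingPipeline.run();

    Message advisory = advisoryConsumer.receive(5000);
    assertNotNull(advisory);
    // getJMSTimestamp() is set by the broker; allow a second of clock skew
    assertTrue(advisory.getJMSTimestamp() >= publishedAt - 1000);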

All in all, it’s a good tool to have! But perhaps you have some other thoughts on this subject? How do you test JMS?


Integration testing custom validation constraints in Jersey 2

I recently joined a team trying to switch a monolithic legacy system into a set of RESTful services in Java. They decided to use the latest 2.x version of Jersey as a REST container, which was not my first choice, since I’m not a big fan of JSR-* specs. But now I must admit that JAX-RS 2.x is doing things right: it requires almost zero boilerplate code, supports auto-discovery of features and prefers convention over configuration, like other modern frameworks. Since the spec is still young, it’s hard to find good tutorials and kick-off projects with some working code. I created the jersey2-starter project on GitHub which can be used as a starting point for your own production-ready RESTful service. In this post I’d like to cover how to implement and integration-test your own validation constraints for REST resources.

Custom constraints

One of the issues which bothers me when coding REST in Java is littering your class model with annotations. Suppose you want to build a simple Todo list REST service; using Jackson, Bean Validation and Spring Data, you can easily end up with this as your entity class:

@Document
public class Todo {
    private Long id;
    @NotNull
    private String description;
    @NotNull
    private Boolean completed;
    @NotNull
    private DateTime dueDate;

    @JsonCreator
    public Todo(@JsonProperty("description") String description, @JsonProperty("dueDate") DateTime dueDate) {
        this.description = description;
        this.dueDate = dueDate;
        this.completed = false;
    }
    // getters and setters
}

Your domain model is now effectively blurred by messy annotations almost everywhere. Let’s see what we can do with the validation constraints (the @NotNulls). Some may say that you could introduce a DTO layer with its own validation rules, but for me that conflicts with pure REST API design, which states that you operate on resources which should map to your domain classes. On the other hand – what does it mean that a Todo object is valid? When you create a Todo you should provide a description and a due date, but what about when you’re updating? You should be able to change any of the description, the due date (postponing) and the completion flag (marking as done) – but you should provide at least one of these for a valid modification. So my idea is to introduce custom validation constraints, different ones for creation and modification:

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForCreation.Validator.class)
public @interface ValidForCreation {
    //...
    class Validator implements ConstraintValidator<ValidForCreation, Todo> {
        //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && todo.getDescription() != null
                && todo.getDueDate() != null;
        }
    }
}

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForModification.Validator.class)
public @interface ValidForModification {
    //...
    class Validator implements ConstraintValidator<ValidForModification, Todo> {
        //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && (todo.getDescription() != null || todo.getDueDate() != null || todo.isCompleted() != null);
        }
    }
}

And now you can move validation annotations to the definition of a REST endpoint:

@POST
@Consumes(APPLICATION_JSON)
public Response create(@ValidForCreation Todo todo) {...}

@PUT
@Consumes(APPLICATION_JSON)
public Response update(@ValidForModification Todo todo) {...}

Now you can remove those @NotNulls from your model.
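
For illustration, the entity then shrinks to plain fields – the same class as before, just without the constraint annotations:

@Document
public class Todo {
    private Long id;
    private String description;
    private Boolean completed;
    private DateTime dueDate;
    // constructor, getters and setters unchanged
}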

Integration testing

There are in general two approaches to integration testing:

  • the test runs in a separate JVM from the app, which is deployed on some other integration environment
  • the test deploys the application programmatically in its setup block.

Both of these have their pros and cons, but for small enough services I personally prefer the second approach. It’s much easier to set up and you have only one JVM running, which makes debugging really easy. You can use a generic framework like Arquillian for starting your application in a container environment, but I prefer simple solutions and just use embedded Jetty. To make the test setup 100% production-equivalent, I create a full Jetty WebAppContext and have to resolve all runtime dependencies for Jersey auto-discovery to work. This can be achieved with the Maven resolver from ShrinkWrap – an Arquillian subproject:

    // serve web.xml and static resources from the regular webapp directory
    WebAppContext webAppContext = new WebAppContext();
    webAppContext.setResourceBase("src/main/webapp");
    webAppContext.setContextPath("/");

    // resolve all compile and runtime dependencies declared in pom.xml
    File[] mavenLibs = Maven.resolver().loadPomFromFile("pom.xml")
                .importCompileAndRuntimeDependencies()
                .resolve().withTransitivity().asFile();

    // register each resolved jar as if it were in WEB-INF/lib,
    // so that Jersey auto-discovery can scan it
    for (File file: mavenLibs) {
        webAppContext.getMetaData().addWebInfJar(new FileResource(file.toURI()));
    }

    // expose our own compiled classes as if they were WEB-INF/classes
    webAppContext.getMetaData().addContainerResource(new FileResource(new File("./target/classes").toURI()));

    // enable annotation scanning on top of standard web.xml processing
    webAppContext.setConfigurations(new Configuration[] {
        new AnnotationConfiguration(),
        new WebXmlConfiguration(),
        new WebInfConfiguration()
    });
    server.setHandler(webAppContext);

(a Stack Overflow thread inspired me a lot here)
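
The tests below call a createTarget() helper which is not shown here; a minimal sketch of it, assuming the embedded Jetty listens on port 8080 and the resource is mounted under /todos (adjust both to your setup):

private WebTarget createTarget() {
    // JAX-RS 2.x client pointing at the embedded server started above;
    // JacksonFeature is assumed for JSON (de)serialization
    return ClientBuilder.newClient()
            .register(JacksonFeature.class)
            .target("http://localhost:8080/todos");
}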

Now it’s time for the last part of the post: parametrizing our integration tests. Since we want to test validation constraints, there are many edge cases to check (and to bring your code coverage close to 100%). Writing one test per case could be a bad idea. Among the many solutions for JUnit, I’m most convinced by JUnit Params from the Pragmatists team. It’s really simple and has a nice jQuery-like helper ($) for creating providers. Here is my test code (I’m also using the builder pattern to create various kinds of Todos):

@Test
@Parameters(method = "provideInvalidTodosForCreation")
public void shouldRejectInvalidTodoWhenCreate(Todo todo) {
    Response response = createTarget().request().post(Entity.json(todo));

    assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
}

private static Object[] provideInvalidTodosForCreation() {
    return $(
        new TodoBuilder().withDescription("test").build(),
        new TodoBuilder().withDueDate(DateTime.now()).build(),
        new TodoBuilder().withId(123L).build(),
        new TodoBuilder().build()
    );
}
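
TodoBuilder is not shown in the post either; one possible shape for it, assuming Todo exposes a setter for id (part of the getters and setters elided above):

public class TodoBuilder {
    private Long id;
    private String description;
    private DateTime dueDate;

    public TodoBuilder withId(Long id) { this.id = id; return this; }
    public TodoBuilder withDescription(String description) { this.description = description; return this; }
    public TodoBuilder withDueDate(DateTime dueDate) { this.dueDate = dueDate; return this; }

    public Todo build() {
        Todo todo = new Todo(description, dueDate);
        todo.setId(id); // assumed setter, see note above
        return todo;
    }
}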

OK, enough of reading, feel free to clone the project and start writing your REST services!
