Integration testing custom validation constraints in Jersey 2
- by Piotr Jagielski
- October 26, 2013
I recently joined a team trying to turn a monolithic legacy system into a set of RESTful services in Java. They decided to use the latest 2.x version of Jersey as a REST container, which was not my first choice, since I’m not a big fan of JSR-* specs. But now I must admit that JAX-RS 2.x does things right: it requires almost zero boilerplate code, supports auto-discovery of features and prefers convention over configuration, like other modern frameworks. Since the spec is still young, it’s hard to find good tutorials and kick-off projects with working code. I created the jersey2-starter project on GitHub, which can be used as a starting point for your own production-ready RESTful service. In this post I’d like to cover how to implement and integration-test your own validation constraints on REST resources.
Custom constraints
One of the issues that bothers me when coding REST in Java is littering your class model with annotations. Suppose you want to build a simple Todo-list REST service; when using Jackson, validation and Spring Data, you can easily end up with this as your entity class:
@Document
public class Todo {
private Long id;
@NotNull
private String description;
@NotNull
private Boolean completed;
@NotNull
private DateTime dueDate;
@JsonCreator
public Todo(@JsonProperty("description") String description, @JsonProperty("dueDate") DateTime dueDate) {
this.description = description;
this.dueDate = dueDate;
this.completed = false;
}
// getters and setters
}
Your domain model is now effectively blurred by messy annotations almost everywhere. Let’s see what we can do about the validation constraints (the @NotNulls). Some may say that you could introduce a DTO layer with its own validation rules, but for me that conflicts with pure REST API design, which states that you operate on resources, and these should map to your domain classes. On the other hand, what does it mean that a Todo object is valid? When you create a Todo you should provide a description and a due date, but what about when you’re updating? You should be able to change any of the description, the due date (postponing) and the completion flag (marking as done), but you should provide at least one of these for a valid modification. So my idea is to introduce custom validation constraints, different ones for creation and modification:
@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForCreation.Validator.class)
public @interface ValidForCreation {
//...
class Validator implements ConstraintValidator<ValidForCreation, Todo> {
//...
@Override
public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
return todo != null
&& todo.getId() == null
&& todo.getDescription() != null
&& todo.getDueDate() != null;
}
}
}
@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForModification.Validator.class)
public @interface ValidForModification {
//...
class Validator implements ConstraintValidator<ValidForModification, Todo> {
//...
@Override
public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
return todo != null
&& todo.getId() == null
&& (todo.getDescription() != null || todo.getDueDate() != null || todo.isCompleted() != null);
}
}
}
And now you can move validation annotations to the definition of a REST endpoint:
@POST
@Consumes(APPLICATION_JSON)
public Response create(@ValidForCreation Todo todo) {...}
@PUT
@Consumes(APPLICATION_JSON)
public Response update(@ValidForModification Todo todo) {...}
Now you can remove those @NotNulls from your model.
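With the constraints moved to the endpoints, the entity carries no validation annotations at all. A minimal sketch of the slimmed-down class; note this is an illustration, not code from the project: Joda-Time’s DateTime is swapped for java.time.LocalDateTime so the snippet is dependency-free, and the Jackson annotations from the original would still be needed in practice for JSON mapping.

```java
import java.time.LocalDateTime;

// Slimmed-down Todo: only data, no validation annotations.
public class Todo {
    private Long id;
    private String description;
    private Boolean completed;
    private LocalDateTime dueDate;

    public Todo(String description, LocalDateTime dueDate) {
        this.description = description;
        this.dueDate = dueDate;
        this.completed = false; // new todos start as not completed
    }

    public Long getId() { return id; }
    public String getDescription() { return description; }
    public Boolean isCompleted() { return completed; }
    public LocalDateTime getDueDate() { return dueDate; }
}
```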
Integration testing
There are in general two approaches to integration testing:
- the test runs in a separate JVM from the app, which is deployed to some external integration environment
- the test deploys the application programmatically in its setup block
Both of these have their pros and cons, but for small enough services I personally prefer the second approach. It’s much easier to set up, and only one JVM is started, which makes debugging really easy. You can use a generic framework like Arquillian to start your application in a container environment, but I prefer simple solutions and just use embedded Jetty. To make the test setup 100% production-equivalent, I create a full Jetty WebAppContext and have to resolve all runtime dependencies for Jersey auto-discovery to work. This can be achieved simply with the Maven resolver from ShrinkWrap, an Arquillian subproject:
Server server = new Server(8080); // embedded Jetty server; the port is an example
WebAppContext webAppContext = new WebAppContext();
webAppContext.setResourceBase("src/main/webapp");
webAppContext.setContextPath("/");
File[] mavenLibs = Maven.resolver().loadPomFromFile("pom.xml")
.importCompileAndRuntimeDependencies()
.resolve().withTransitivity().asFile();
for (File file: mavenLibs) {
webAppContext.getMetaData().addWebInfJar(new FileResource(file.toURI()));
}
webAppContext.getMetaData().addContainerResource(new FileResource(new File("./target/classes").toURI()));
webAppContext.setConfigurations(new Configuration[] {
new AnnotationConfiguration(),
new WebXmlConfiguration(),
new WebInfConfiguration()
});
server.setHandler(webAppContext);
(this Stack Overflow thread inspired me a lot here)
Now it’s time for the last part of the post: parameterizing our integration tests. Since we want to test validation constraints, there are many edge cases to check (and to bring your code coverage close to 100%). Writing one test method per case would be a bad idea. Among the many solutions for JUnit, I’m most convinced by JUnit Params from the Pragmatists team. It’s really simple and has a nice concept of a jQuery-like helper for creating providers. Here is my test code (I’m also using the builder pattern here to create various kinds of Todos):
@Test
@Parameters(method = "provideInvalidTodosForCreation")
public void shouldRejectInvalidTodoWhenCreate(Todo todo) {
Response response = createTarget().request().post(Entity.json(todo));
assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
}
private static Object[] provideInvalidTodosForCreation() {
return $(
new TodoBuilder().withDescription("test").build(),
new TodoBuilder().withDueDate(DateTime.now()).build(),
new TodoBuilder().withId(123L).build(),
new TodoBuilder().build()
);
}
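To see why each of the four provider cases must be rejected, here is a dependency-free sketch: a hypothetical Todo stand-in plus the creation rule mirrored from the ValidForCreation validator above, checked against the same four inputs. The class and field names are illustrative, not taken from the project.

```java
import java.time.LocalDateTime;

public class CreationRuleSketch {
    // Minimal stand-in for the Todo entity.
    static class Todo {
        Long id;
        String description;
        LocalDateTime dueDate;
        Todo(Long id, String description, LocalDateTime dueDate) {
            this.id = id;
            this.description = description;
            this.dueDate = dueDate;
        }
    }

    // Mirrors ValidForCreation.Validator.isValid: no id yet,
    // description and due date both required.
    static boolean validForCreation(Todo todo) {
        return todo != null
                && todo.id == null
                && todo.description != null
                && todo.dueDate != null;
    }

    public static void main(String[] args) {
        // The same four cases the provider returns, in order.
        Todo[] invalid = {
                new Todo(null, "test", null),              // due date missing
                new Todo(null, null, LocalDateTime.now()), // description missing
                new Todo(123L, null, null),                // id must not be set
                new Todo(null, null, null)                 // everything missing
        };
        for (Todo todo : invalid) {
            if (validForCreation(todo)) {
                throw new AssertionError("expected invalid todo");
            }
        }
        System.out.println("all four cases rejected");
    }
}
```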
OK, enough reading; feel free to clone the project and start writing your REST services!
Spock basics
- by Rafał Nowak
- September 7, 2013
So how can I start?
What is great about Spock is that we can use it to test Groovy projects, pure Java projects, or even mixed ones.
Let's go! A Spock test class (a specification) simply extends spock.lang.Specification:
import spock.lang.*
class UserSpec extends Specification {
}
Now we can proceed to defining test fixtures and test methods.
class UserSpec extends Specification {
User user
Document document
def setup() {
user = new User()
document = DocumentTestFactory.createDocumentWithTitle("doc1")
}
def cleanup() {
}
}
Of course we can use field initialization for instantiating test objects:
class UserSpec extends Specification {
User user = new User()
Document document = DocumentTestFactory.createDocumentWithTitle("doc1")
def setup() {
}
def cleanup() {
}
}
First tests
class UserSpec extends Specification {
// ...
def "should assign comment to user"() {
// ...
}
}
With such a naming convention we can write a real specification and include details about the specified behaviour in the method name, which is very convenient when reading test reports and analyzing errors.
A test method (also called a feature method) is logically divided into a few blocks, each with its own purpose. Blocks are defined like labels in Java (but they are transformed with Groovy AST transformation features), and some of them must appear in the code in a specific order.
The most basic and common schema for a Spock test is:
class UserSpec extends Specification {
// ...
def "should assign comment to user"() {
given:
// do initialization of test objects
when:
// perform actions to be tested
then:
// collect and analyze results
}
}
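Since Spock can test plain Java classes, it may help to see how the given/when/then blocks map onto the classic arrange/act/assert phases of a framework-free Java test. The User and Comment classes below are hypothetical stand-ins, not code from the post:

```java
public class UserCommentTest {
    // Minimal stand-ins for the domain classes used in the Spock examples.
    static class Comment { }

    static class User {
        private Comment comment;
        void addComment(Comment c) { this.comment = c; }
        Comment getComment() { return comment; }
    }

    public static void main(String[] args) {
        // given (arrange): initialize test objects
        User user = new User();
        Comment comment = new Comment();
        // when (act): perform the action to be tested
        user.addComment(comment);
        // then (assert): check the results
        if (user.getComment() != comment) {
            throw new AssertionError("comment was not assigned to user");
        }
        System.out.println("ok");
    }
}
```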
But there are more blocks like:
- setup
- expect
- where
- cleanup
given block
class UserSpec extends Specification {
// ...
def "should add project to user and mark user as project's owner"() {
given:
User user = new User()
Project project = ProjectTestFactory.createProjectWithName("simple project")
// ...
}
}
In this code the given block contains the initialization of test objects and nothing more. We create a simple user without any specified attributes and a project with a given name. If some of these objects can be reused across feature methods, it may be worth moving their initialization to the setup method.
when and then blocks
class UserSpec extends Specification {
// ...
def "should assign user to comment when adding comment to user"() {
given:
User user = new User()
Comment comment = new Comment()
when:
user.addComment(comment)
then:
comment.getUserWhoCreatedComment().equals(user)
}
// ...
}
In the when block there is a call to the tested method and nothing more. After we are sure our action was performed, we can check for the desired conditions in the then block.
The then block is very well structured: every line in it is treated by Spock as a boolean statement. That means Spock expects instructions containing comparisons and expressions returning true or false, so we can create a then block with statements like these:
user.getName() == "John"
user.getAge() == 40
!user.isEnabled()
Each of these lines will be treated as a single assertion and evaluated by Spock.
Sometimes we expect our method to throw an exception under given circumstances. We can write a test for it using the thrown method:
class CommentSpec extends Specification {
def "should throw exception when adding null document to comment"() {
given:
Comment comment = new Comment()
when:
comment.setCommentedDocument(null)
then:
thrown(RuntimeException)
}
}
In this test we want to make sure that passing an incorrect parameter is handled correctly by the tested method and that the method throws an exception in response. If you want to be certain that a method does not throw a particular exception, simply use the notThrown method.
expect block
The expect block lets us combine the stimulus and the assertion in one place, which is convenient for simple cases:
def "should create user with given name"() {
given:
User user = UserTestFactory.createUser("john doe")
expect:
user.getName() == "john doe"
}
More blocks!
Those were very simple tests with the standard Spock test layout and the canonical division into given/when/then parts. But Spock offers more possibilities for writing tests and provides more blocks.
setup/cleanup blocks
The setup: and cleanup: labels are inline equivalents of the setup() and cleanup() fixture methods shown earlier, scoped to a single feature method.
where - an easy way to create readable parameterized tests
def "should successfully validate emails with valid syntax"() {
expect:
emailValidator.validate(email) == true
where:
email << ["foo@bar.com", "a@b.pl"]
}
In this example, Spock creates a variable called email which is used when calling the method being tested. Internally the feature method is defined once, but the framework iterates over the given values and calls the expect/when blocks as many times as there are values (however, if we use the @Unroll annotation, Spock can create a separate run for each of the given values; more about it in one of the next examples).
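The emailValidator object itself is never defined in the post. Since Spock tests Java classes directly, here is a hypothetical Java implementation consistent with the valid/invalid cases used in the examples (the class name and regex are assumptions, not the author's code):

```java
import java.util.regex.Pattern;

// Hypothetical validator: requires a non-empty local part, an '@',
// and a domain with at least one dot-separated segment, so a bare
// "a@test" is rejected while "foo@bar.com" passes.
public class EmailValidator {
    private static final Pattern EMAIL =
            Pattern.compile("^[A-Za-z0-9._%+-]+@[A-Za-z0-9-]+(\\.[A-Za-z0-9-]+)+$");

    public boolean validate(String email) {
        return email != null && EMAIL.matcher(email).matches();
    }
}
```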
Now, let's assume that we want our feature method to test both successful and failed validations. To achieve that goal we can create a few parameterized variables, for both the input parameter and the expected result. Here is a little example:
def "should perform validation of email addresses"() {
expect:
emailValidator.validate(email) == result
where:
email << ["WTF", "@domain", "foo@bar.com", "a@test"]
result << [false, false, true, false]
}
Well, it looks nice, but Spock can do much better. It offers a tabular format for defining test parameters, which is much more readable and natural. Let's take a look:
def "should perform validation of email addresses"() {
expect:
emailValidator.validate(email) == result
where:
email | result
"WTF" | false
"@domain" | false
"foo@bar.com" | true
"a@test" | false
}
In this code, each column of our "table" is treated as a separate variable and the rows are values for subsequent test iterations.
Another useful Spock feature for parameterized tests is its ability to "unroll" each parameterized case. The feature method from the previous example could be defined as follows (the body stays the same, so I do not repeat it):
@Unroll("should validate email #email")
def "should perform validation of email addresses"() {
// ...
}
With that annotation, Spock generates a few methods, each with its own name, and runs them separately. We can use symbols from the where block in the @Unroll argument by preceding them with a '#' sign, which tells Spock to use them in the generated method name.
What next?