Opportunity to grow

In 2022, in addition to the work we do for our clients, we put our heads together and organized events and activities that would help us further develop our skills and knowledge, while providing a healthy break from our daily tasks. What follows is a brief account of what we planned, what worked and what didn’t.

Book Santa

We started the new year with gifts. Everyone could order a book – an actual physical book made of paper and ink – the only condition being that it be “at least a little work-related”. The idea was well received and orders soon poured in. The logistics involved in collecting titles from 130 people, then ordering and shipping them to those who mostly work remotely, proved something of a challenge, but somehow we managed. Only one book from the UK got irretrievably stuck in customs (thanks, Brexit!). I don’t know how many books were read – since they were gifts, it didn’t seem appropriate to ask! – but I do know that one of them is currently being used as a monitor stand. An unexpectedly fascinating side effect of the scheme was being able to look through the openly shared Excel sheet, see who had ordered what, and learn about books you’d previously been unaware of.

Flashtalks

We meet once a month on a Friday to hear three people from the company, usually developers, give short presentations. Ensuring that every month there are enough volunteers willing to talk about something technical or somehow related to our work is no mean feat; persuading, encouraging and arm-twisting busy colleagues is a major challenge for the organizers. Occasionally the talks cover less technical areas, and we get the chance to learn about the advantages of giving and receiving feedback, saving money, or becoming a conference speaker. The purpose of Flashtalks is both to share knowledge and to create a space where presentation skills can be practiced and honed in front of a larger audience. In 2022 there were 29 speakers (seven of whom spoke twice) and 36 presentations altogether. Topics ranged from “How to monitor applications after deployment” to “Will Niagara Files work on the Mississippi” and “DIY simple lab power supply”. Novice developers and seasoned conference veterans alike share their wisdom. One of the great things about Flashtalks is the post-talk questionnaire, which always provides valuable feedback on what went well and what could be done better. Each Flashtalk ends with a quiz, which adds an element of gamification and keeps Flashtalks from becoming merely a chance to take a break from work.

One of the aims of Flashtalks is to create new conference speakers. This year Monika Fus made her debut at Confitura, an event very close to our hearts, where she spoke about “L10n, i18n and t9n in the JVM world”. Other speakers included Piotr Fus with his presentation “How to get started with metrics”, Maciek Próchniak with “Well, it’s time to synchronize watches” and Dominik Przybysz with “OOP Revisited”. The whole team traveled around Poland and beyond with their presentations. Check them out here: https://touk.pl/talks/2022. While we had hoped for more newcomers in 2022, we are confident that 2023 will see a greater number of conference debuts.

Dojo

TouK Dojo is a more hands-on learning initiative that one of our developers came up with this year. We hold technical workshops for each other, as well as meetings where we solve problems, often algorithmic ones. It’s a good opportunity to bounce ideas off each other about how to do things, try out tools, ask questions and work on real examples. Meetings usually take place once a week, both in person and online.

Blog

In April, as a grassroots initiative, a few of us set ourselves the goal of posting monthly on our company blog. We wanted to restore it to its former glory, or perhaps just revive it after the drought years of the pandemic. We almost succeeded: our combined efforts resulted in six posts, covering our hackathons, a summary of the first post-pandemic Confitura and, of course, some technical topics. We are now thinking about how to take the initiative further. The valuable content we could share is there; we just need to find the time and a way to overcome writer’s block.

Hackathons

Last year we organized two hackathons: one in the spring and one in the autumn. We had initially considered having four in the space of a year, but that turned out to be a bit too ambitious, given the amount of work we do for our clients and the number of ideas to hack. The general framework, worked out through experience, is: two days in our office, any project, any team composition, pizza and snacks for all. You can read detailed descriptions of what we achieved here: https://touk.pl/blog/2022/05/12/touk-hackathon-toukathon-april-2022/ and https://touk.pl/blog/2022/11/28/toukathon-october-2022/. In spring, sixteen people coded together; in autumn, eight.

Books

It has always been possible for any employee to order books for our shared library. Perhaps because of the number of books gifted at the start of the year, the number ordered during the year was a little smaller than usual. Still, our library has now been enriched with:

  • Diagnosing and Changing Organizational Culture: Based on the Competing Values Framework, Kim S. Cameron, Robert E. Quinn
  • The Enterprise Big Data Lake, Alex Gorelik
  • The Java Module System, Nicolai Parlog
  • Bulletproof TLS and PKI, 2nd edition, Ivan Ristić
  • Job Crafting. Nowa metoda budowania zaangażowania i poczucia sensu w pracy, Malwina Puchalska-Kamińska, Agnieszka Łądka-Barańska
  • Functional Programming in Scala, 2nd edition, Paul Chiusano, Rúnar Bjarnason
  • Functional Design and Architecture v7, Alexander Granin
  • Functional Event-Driven Architecture, Gabriel Volpe
  • Code, Charles Petzold

Since January is the time for resolutions, here are ours: this year we want to continue the Flashtalks (the first one already took place on 20 January), we will keep hacking together, and we are already planning a calendar of conference talks.


Integration testing custom validation constraints in Jersey 2

I recently joined a team trying to turn a monolithic legacy system into a set of RESTful services in Java. They decided to use the latest 2.x version of Jersey as the REST container, which would not have been my first choice, since I’m not a big fan of the JSR-* specs. But now I must admit that JAX-RS 2.x does things right: it requires almost zero boilerplate code, supports auto-discovery of features and prefers convention over configuration, like other modern frameworks. Since the spec is still young, it’s hard to find good tutorials and kick-off projects with working code. I created the jersey2-starter project on GitHub, which can be used as a starting point for your own production-ready RESTful service. In this post I’d like to cover how to implement and integration-test your own validation constraints for REST resources.

Custom constraints

One of the issues that bothers me when coding REST in Java is littering your class model with annotations. Suppose you want to build a simple Todo list REST service; using Jackson, Bean Validation and Spring Data, you can easily end up with this as your entity class:

import javax.validation.constraints.NotNull;

import org.joda.time.DateTime;
import org.springframework.data.mongodb.core.mapping.Document;

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

@Document
public class Todo {
    private Long id;
    @NotNull
    private String description;
    @NotNull
    private Boolean completed;
    @NotNull
    private DateTime dueDate;

    @JsonCreator
    public Todo(@JsonProperty("description") String description, @JsonProperty("dueDate") DateTime dueDate) {
        this.description = description;
        this.dueDate = dueDate;
        this.completed = false;
    }
    // getters and setters
}

Your domain model is now effectively blurred by messy annotations almost everywhere. Let’s see what we can do with the validation constraints (the @NotNulls). Some may say you could introduce a DTO layer with its own validation rules, but for me that conflicts with pure REST API design, which holds that you operate on resources that should map to your domain classes. On the other hand, what does it mean for a Todo object to be valid? When you create a Todo you should provide a description and a due date, but what about when you’re updating? You should be able to change any of the description, the due date (postponing) or the completion flag (marking as done), but you should provide at least one of these for a valid modification. So my idea is to introduce custom validation constraints, different ones for creation and for modification:

import static java.lang.annotation.ElementType.PARAMETER;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForCreation.Validator.class)
public @interface ValidForCreation {
    //...
    class Validator implements ConstraintValidator<ValidForCreation, Todo> {
        //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && todo.getDescription() != null
                && todo.getDueDate() != null;
        }
    }
}

@Target({TYPE, PARAMETER})
@Retention(RUNTIME)
@Constraint(validatedBy = ValidForModification.Validator.class)
public @interface ValidForModification {
    //...
    class Validator implements ConstraintValidator<ValidForModification, Todo> {
        //...
        @Override
        public boolean isValid(Todo todo, ConstraintValidatorContext constraintValidatorContext) {
            return todo != null
                && todo.getId() == null
                && (todo.getDescription() != null || todo.getDueDate() != null || todo.isCompleted() != null);
        }
    }
}
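
As a side note, the //... placeholders above hide, among other things, the three attributes that the Bean Validation spec requires on every constraint annotation. A minimal sketch of that boilerplate (the message text is just an illustrative example):

import javax.validation.Payload;

// Attribute declarations only; @Target/@Retention/@Constraint as shown above.
public @interface ValidForCreation {
    String message() default "not valid for creation";    // example message text

    Class<?>[] groups() default {};                       // validation groups

    Class<? extends Payload>[] payload() default {};      // metadata for API clients
}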

Now you can move the validation annotations to the definitions of the REST endpoints:

@POST
@Consumes(APPLICATION_JSON)
public Response create(@ValidForCreation Todo todo) {...}

@PUT
@Consumes(APPLICATION_JSON)
public Response update(@ValidForModification Todo todo) {...}

Finally, you can remove those @NotNulls from your model.
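
For comparison, the entity from the beginning of the post now shrinks to its Jackson and Spring Data mappings alone:

@Document
public class Todo {
    private Long id;
    private String description;
    private Boolean completed;
    private DateTime dueDate;

    @JsonCreator
    public Todo(@JsonProperty("description") String description, @JsonProperty("dueDate") DateTime dueDate) {
        this.description = description;
        this.dueDate = dueDate;
        this.completed = false;
    }
    // getters and setters
}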

Integration testing

There are, in general, two approaches to integration testing:

  • the test runs in a separate JVM from the app, which is deployed to some other integration environment
  • the test deploys the application programmatically in its setup block.

Both of these have their pros and cons, but for small enough services I personally prefer the second approach. It’s much easier to set up, and you have only one JVM running, which makes debugging really easy. You can use a generic framework like Arquillian to start your application in a container environment, but I prefer simple solutions, so I just use embedded Jetty. To make the test setup 100% production-equivalent, I create a full Jetty WebAppContext and have to resolve all runtime dependencies for Jersey’s auto-discovery to work. This can easily be achieved with the Maven resolver from ShrinkWrap, an Arquillian subproject:

    // Embedded Jetty server hosting the full web app, just as in production
    Server server = new Server(8080);

    WebAppContext webAppContext = new WebAppContext();
    webAppContext.setResourceBase("src/main/webapp");
    webAppContext.setContextPath("/");

    // Resolve all compile and runtime dependencies from pom.xml so that
    // Jersey's classpath scanning sees the same jars as in production
    File[] mavenLibs = Maven.resolver().loadPomFromFile("pom.xml")
                .importCompileAndRuntimeDependencies()
                .resolve().withTransitivity().asFile();
    for (File file : mavenLibs) {
        webAppContext.getMetaData().addWebInfJar(new FileResource(file.toURI()));
    }
    // Add our own compiled classes to the scanned classpath as well
    webAppContext.getMetaData().addContainerResource(new FileResource(new File("./target/classes").toURI()));

    webAppContext.setConfigurations(new Configuration[] {
        new AnnotationConfiguration(),
        new WebXmlConfiguration(),
        new WebInfConfiguration()
    });
    server.setHandler(webAppContext);

(this Stack Overflow thread inspired me a lot here)
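
In my tests this setup lives in JUnit lifecycle methods. A minimal sketch, assuming JUnit 4 and with the WebAppContext construction above extracted to a hypothetical createWebAppContext() helper (the actual structure in jersey2-starter may differ):

import org.eclipse.jetty.server.Server;
import org.junit.AfterClass;
import org.junit.BeforeClass;

public class TodoResourceIT {

    private static Server server;

    @BeforeClass
    public static void startServer() throws Exception {
        server = new Server(8080);
        // the WebAppContext setup from the snippet above, extracted to a helper
        server.setHandler(createWebAppContext());
        server.start();   // deploys the app in the same JVM as the tests
    }

    @AfterClass
    public static void stopServer() throws Exception {
        server.stop();
    }
}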

Now it’s time for the last part of the post: parametrizing our integration tests. Since we want to test the validation constraints, there are many edge cases to check (and we’d like code coverage close to 100%). Writing a separate test for each case would be a bad idea, so among the many solutions for JUnit I find JUnitParams by the Pragmatists team the most convincing. It’s really simple and has a nice, jQuery-like helper for creating providers. Here is my test code (I’m also using the builder pattern to create various kinds of Todos):

@Test
@Parameters(method = "provideInvalidTodosForCreation")
public void shouldRejectInvalidTodoWhenCreate(Todo todo) {
    Response response = createTarget().request().post(Entity.json(todo));

    assertThat(response.getStatus()).isEqualTo(BAD_REQUEST.getStatusCode());
}

private static Object[] provideInvalidTodosForCreation() {
    return $(
        new TodoBuilder().withDescription("test").build(),
        new TodoBuilder().withDueDate(DateTime.now()).build(),
        new TodoBuilder().withId(123L).build(),
        new TodoBuilder().build()
    );
}
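
For completeness: JUnitParams requires its own runner on the test class, and createTarget() is a small helper from the test code. A sketch of both, assuming the standard JAX-RS 2 client API (the actual helper in jersey2-starter may look different):

import static junitparams.JUnitParamsRunner.$;   // the provider helper used above

import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;

import org.junit.runner.RunWith;

import junitparams.JUnitParamsRunner;

@RunWith(JUnitParamsRunner.class)   // enables @Parameters-driven test methods
public class TodoResourceIT {

    // hypothetical helper: points a JAX-RS client at the embedded Jetty instance
    private WebTarget createTarget() {
        return ClientBuilder.newClient().target("http://localhost:8080/todos");
    }
}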

OK, enough reading – feel free to clone the project and start writing your own REST services!
