Formatting Java Time with Spring Boot using JSON

The aim of this post is to summarize and review ways of formatting Java Time objects using Spring Boot and the Jackson library.

This post is organized into five steps. Each step covers one aspect of the issue and corresponds to one commit in the example project repository.

Step 0 – Prerequisites

Versions and dependencies

This tutorial is based on Spring Boot version 1.3.1.RELEASE with spring-boot-starter-web. It uses jackson-datatype-jsr310 from com.fasterxml.jackson.datatype in version 2.6.4, which is the default version managed by this Spring Boot release. All of this is based on Java 8.

The Code

In the example code repository, you can find an HTTP service made with Spring Boot. The service exposes a GET operation that returns a class containing Java Time objects. You can also find an integration test that deserializes the response.

Step 1 – The goal

I would like to return class Clock, containing LocalDate,LocalTime and LocalDateTime, preinitialized in constructor.

public final class Clock {
    private final LocalDate localDate;
    private final LocalTime localTime;
    private final LocalDateTime localDateTime;
    ...
}
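For reference, the GET endpoint mentioned earlier might look roughly like this (ClockController and its details are my assumptions, not necessarily the exact repository code):

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ClockController {

    // Assumption: Clock's constructor preinitializes the three fields.
    @RequestMapping(value = "/clock", method = RequestMethod.GET)
    public Clock clock() {
        return new Clock();
    }
}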

The response class is serialized to a JSON map, which is the default behaviour. To some extent this is correct, but ISO-formatted strings in the response are preferable.

{  
    "localDate":{  
        "year":2016,
        "month":"JANUARY",
        "era":"CE",
        "dayOfYear":1,
        "dayOfWeek":"FRIDAY",
        "leapYear":true,
        "dayOfMonth":1,
        "monthValue":1,
        "chronology":{  
            "id":"ISO",
            "calendarType":"iso8601"
        }
    }
}

An integration test is an appropriate way to verify this functionality.

ResponseEntity<Clock> resp = sut.getForEntity("http://localhost:8080/clock", Clock.class);

assertEquals(OK, resp.getStatusCode());
assertEquals(c.getLocalDate(), resp.getBody().getLocalDate());
assertEquals(c.getLocalTime(), resp.getBody().getLocalTime());
assertEquals(c.getLocalDateTime(), resp.getBody().getLocalDateTime());
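For context, sut and c are not shown in the snippet; they might simply be fields of the integration test class (names and initialization are my assumptions):

// Plain client used to call the running service.
private final RestTemplate sut = new RestTemplate();

// Assumption: Clock's constructor preinitializes its fields,
// so a fresh instance matches what the service returns.
private final Clock c = new Clock();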

Unfortunately, the tests do not pass because of deserialization problems. An exception is thrown with a message along the lines of can not instantiate from JSON object, because Jackson does not yet know how to construct the Java Time types.

Step 2 – Adds serialization

First things first: we have to add the JSR-310 module. It is a datatype module that makes Jackson recognize Java 8 Date & Time API types.

Note that in this example the jackson-datatype-jsr310 version is inherited from the spring-boot-dependencies dependency management.

<dependency>
    <groupId>com.fasterxml.jackson.datatype</groupId>
    <artifactId>jackson-datatype-jsr310</artifactId>
</dependency>

The response is now consistent, but still not perfect. Dates are serialized as arrays of numbers, and ZonedDateTime as an epoch timestamp:

{  
    "version":2,
    "localDate":[  
        2016,
        1,
        1
    ],
    "localTime":[  
        10,
        24
    ],
    "localDateTime":[  
        2016,
        1,
        1,
        10,
        24
    ],
    "zonedDateTime":1451640240.000000000
}

We are one step closer to our goal. The tests pass now because this format can be deserialized without any additional deserializers. How do I know? Start an application server on commit Step 2 - Adds Object Mapper, then check out Step 1 - Introduce types and problems, and run the integration tests without the @WebIntegrationTest annotation.

Step 3 – Enables ISO formatting

ISO 8601 formatting is a standard; I've found it in many projects. We are going to enable and use it. Edit the Spring Boot properties file application.properties and add the following line:

spring.jackson.serialization.WRITE_DATES_AS_TIMESTAMPS = false
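If you ever build the ObjectMapper in code rather than via properties, the same switch can be flipped programmatically (a minimal sketch, equivalent in effect to the property above):

ObjectMapper mapper = new ObjectMapper()
        .registerModule(new JavaTimeModule())
        // same effect as WRITE_DATES_AS_TIMESTAMPS = false in application.properties
        .disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);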

Now the response is what I expected:

{  
    "version":2,
    "localDate":"2016-01-01",
    "localTime":"10:24",
    "localDateTime":"2016-01-01T10:24",
    "zonedDateTime":"2016-01-01T10:24:00+01:00"
}

Step 4 – Adds on-demand formatting pattern

Imagine that one of your client systems cannot format time itself. It may be a primitive device, or a microservice that treats the date as a plain sequence of characters. That is why special formatting is required.

We can change the formatting in the response class by adding the @JsonFormat annotation with a pattern parameter. For Java Time types, the pattern follows standard DateTimeFormatter rules (most letters behave like their SimpleDateFormat counterparts).

@JsonFormat(pattern = "dd::MM::yyyy")
private final LocalDate localDate;
    
@JsonFormat(pattern = "KK:mm a")
private final LocalTime localTime;

Below is the service response using the custom @JsonFormat patterns:

{  
    "version":2,
    "localDate":"01::01::2016",
    "localTime":"10:24 AM",
    "localDateTime":"2016-01-01T10:24",
    "zonedDateTime":"2016-01-01T10:24:00+01:00"
}

Our tests still pass. This means the pattern is used for serialization in the service and for deserialization in the tests.

Step 5 – Globally changes formatting

There are situations where you have to abandon ISO 8601 formatting across your whole application and apply custom-made standards.

In this part, we will redefine the format pattern for LocalDate. This will change the formatting of LocalDate in every endpoint of your API.

We have to define:

  • a DateTimeFormatter with our pattern,
  • a serializer using the defined pattern,
  • a deserializer using the defined pattern,
  • an ObjectMapper bean with the custom serializer and deserializer,
  • a RestTemplate that uses our ObjectMapper.

The ObjectMapper bean is defined with the @Primary annotation to override the default configuration. My custom pattern for LocalDate is dd::MM::yyyy.

public static final DateTimeFormatter FORMATTER = ofPattern("dd::MM::yyyy");
    
@Bean
@Primary
public ObjectMapper serializingObjectMapper() {
    ObjectMapper objectMapper = new ObjectMapper();
    // Register custom LocalDate (de)serializers on top of the JSR-310 module.
    JavaTimeModule javaTimeModule = new JavaTimeModule();
    javaTimeModule.addSerializer(LocalDate.class, new LocalDateSerializer());
    javaTimeModule.addDeserializer(LocalDate.class, new LocalDateDeserializer());
    objectMapper.registerModule(javaTimeModule);
    return objectMapper;
}
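One caveat: a hand-built @Primary ObjectMapper replaces the one Spring Boot auto-configures, so spring.jackson.* properties (like the WRITE_DATES_AS_TIMESTAMPS switch from Step 3) no longer apply to it; anything you still need must be configured on this mapper directly.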

Definitions of the serializer and deserializer for LocalDate:

public class LocalDateSerializer extends JsonSerializer<LocalDate> {
    
    @Override
    public void serialize(LocalDate value, JsonGenerator gen, SerializerProvider serializers) throws IOException {
        // Write the date as a JSON string using the custom pattern.
        gen.writeString(value.format(FORMATTER));
    }
}
    
public class LocalDateDeserializer extends JsonDeserializer<LocalDate> {
    
    @Override
    public LocalDate deserialize(JsonParser p, DeserializationContext ctxt) throws IOException {
        // Parse the incoming JSON string back with the same pattern.
        return LocalDate.parse(p.getValueAsString(), FORMATTER);
    }
}
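A quick way to sanity-check the mapper (a hedged example; expected output shown in comments):

ObjectMapper mapper = serializingObjectMapper();

// Produces "01::01::2016" (as a quoted JSON string).
String json = mapper.writeValueAsString(LocalDate.of(2016, 1, 1));

// Round-trips back through the custom deserializer.
LocalDate parsed = mapper.readValue(json, LocalDate.class);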

Now, the response is formatted with our custom pattern:

{  
    "localDate":"01::01::2016"
}

Tests

When we define a custom serializer, our tests start to fail. This is because RestTemplate knows nothing about our deserializer. We have to create a custom RestTemplateFactory that builds a RestTemplate whose object mapper contains our deserializer.

@Configuration
public class RestTemplateFactory {
    
    @Autowired
    private ObjectMapper objectMapper;
    
    @Bean
    public RestTemplate createRestTemplate() {
        RestTemplate restTemplate = new RestTemplate();
        // Replace the default converters with one backed by our custom ObjectMapper.
        List<HttpMessageConverter<?>> converters = new ArrayList<>();
        MappingJackson2HttpMessageConverter jsonConverter = new MappingJackson2HttpMessageConverter();
        jsonConverter.setObjectMapper(objectMapper);
        converters.add(jsonConverter);
        restTemplate.setMessageConverters(converters);
        return restTemplate;
    }
}
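With this factory in place, the integration test can inject the preconfigured template instead of creating its own (a sketch; the field name matches the earlier snippets):

@Autowired
private RestTemplate sut;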

Conclusion

Custom date formatting is relatively simple, but you have to know how to set it up. Luckily, Jackson works smoothly with Spring. If you know other ways of solving this problem, or you have other observations, please comment or let me know.

