When a distributed monolith may work for you

The term Distributed Monolith surely gets bad press. Reading through blogs and conference talks, you’d conclude that you’re better off building a “traditional” monolith rather than a distributed one. If you end up with the latter, it means you tried to build microservices and failed. And now your life is sad and full of pain ;) In this post I’d like to describe our case, where we built something like a distributed monolith intentionally, with quite a pleasant final experience.

In my opinion the key to microservices lies in Conway’s Law. Looking from that perspective, you’d think that the primary use case for microservices is large organizations with an in-house IT department: many small teams, working independently, with a lot of control over their part of the system. But this doesn’t cover the full spectrum of the software development market ;)

When you are a software house you deal with various types of projects. This includes maintenance, rewriting parts of large systems and – last but not least – developing new systems from scratch. But the development process is quite different from in-house IT: we use small, versatile teams (up to 10 people), without sticking tightly to roles like webdev or tester (developers, not programmers). No two projects are identical for us: every customer has their own needs, tools, infrastructure, procedures, etc. (see http://ericsink.com/No_Programmers.html)

When you plan your system architecture, you often go through some kind of checklist to help you make the best choice. I think the most interesting question on such a checklist is:

Is there a part of the system that a single person, or small team, operates independently inside of?

The usual response for a software house is “Hmm… No”. So are we forced to always build monoliths?

The good parts of a monolith

Let’s first think about what the good parts of a monolithic architecture could be, especially for a small team inside a software house:

Simple communication model

When you want to deliver fast, you have to reduce overhead. If your customer asks for a feature, telling them that you first have to figure out how to make your microservices talk to each other may not be acceptable. Also, less service-to-service communication makes monitoring and debugging in production a lot easier. It’s different when you build a large microservice stack for a big organization – there you have presumably convinced the business owners that you need time to invest in tools and infrastructure, and they have to wait for that before features arrive ;)

Single repository

At software houses, we usually have a lot of freedom in choosing CI/CD tooling: some teams go with full-stack GitLab, some prefer Gerrit + Jenkins, some work on the customer’s infrastructure. We don’t have a repeatable, complicated build pipeline for every project – usually we just run tests, do some static code analysis (with Sputnik), and sometimes push Docker images somewhere. It’s simply easier to have one repository than to set up several of them independently.

Single data source

This also varies across projects, but most of them need just one data source, often just an RDBMS (Postgres) to store business data. It’s easy to administer, manage and migrate. If you want a bit less coupling between parts of the system, separate database schemas can be enough for a start.

Simple deployment infrastructure

Just like above, from the deployment perspective, the simpler, the better. Your customer doesn’t want you to spend 3 weeks setting up a Kubernetes cluster. If you can go with plain Docker without dedicated orchestration – go with it. For us, AWS ECS with some home-made automation worked really well.

The good parts of microservices

On the other hand, while building a monolith may have many good parts, it can also be really painful. I think there are numerous advantages you’ll get from splitting the whole system into a few smaller parts. The key is having multiple deployment units, which gives you real benefits:

Resource separation

Bugs happen. It is obvious that your system will someday fail in production. The real question is: what happens then? How does your system deal with errors? If you use shared resources like database or HTTP client connection pools, a single, even non-critical part of your system may cause a full system outage. That’s why splitting your system into a few smaller applications matters. You (and your customers) can probably live with one of the minor functionalities failing.

Independent scaling

It is also good when your system can scale. It’s even better when you can scale parts of your system independently. Some features are heavily used and business critical – you want to run them in a highly available mode, with the possibility to add instances on the fly when traffic increases. But there are also other parts (e.g. the back office) that are not heavily used, often with zero traffic after working hours. With multiple applications instead of a monolith, independent scaling is easier to achieve.

Network separation

Usually parts of a big system are used from different places. You have some public services (e.g. a mailing service), often with two-way communication (webhooks). On the other hand, a back-office part shouldn’t be exposed publicly – ideally it is hidden behind a VPN to the customer’s office. Having these parts separated makes it easier to put them in different networks and then set up a specific policy for each part.

Separate configuration

Defining a proper authentication configuration for a Java (even Spring) app can be hard. I guess that defining multiple authentication models (e.g. logged-in users, internal features for call-center people and public service calls) for different parts of a monolithic application can be hell. Splitting a monolith into several deployment units, each with its own authentication configuration, may save you many hours of debugging magical, untestable code provided by your framework.

Technology experiments

It’s really bad for a project if you’re stuck with a legacy technology. Having thousands of lines written in a 90s JavaScript framework can be really demotivating for developers. When you have a few smaller deployment units, it’s much easier to run technology experiments. You can either write a whole small part in some crazy technology (we used Elm for a small frontend piece) or try to upgrade some important library in just one place (we switched to the 4.x Redux router in a smaller application before upgrading the bigger one).

Taking the best from both worlds

In our case, we think we took the best from both approaches. We had a small team (8 people at its peak), working on a single Git repository. We delivered 10 separate applications, all using the same data source (Postgres). We used Kotlin + Spring Boot on the backend and React on the frontend. We used Gradle modules to split the backend code into reusable parts, and the Spring Boot apps just picked the pieces they needed.
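To make this more concrete, here is a minimal sketch of that split (module and app names are made up for illustration; this is not our actual build configuration). The root settings.gradle listed the shared modules and the deployable apps:

include 'commons', 'orders', 'backoffice-app', 'public-api-app'

and each Spring Boot app’s build.gradle simply pulled in the modules it needed (assuming the Kotlin and Spring Boot plugins are already on the buildscript classpath):

apply plugin: 'kotlin'
apply plugin: 'org.springframework.boot'

dependencies {
    // in-process dependency on a shared module – calls go through its API, not over HTTP
    compile project(':commons')
    compile project(':orders')
}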

We packed the apps into Docker images, built them with GitLab CI pipelines and used Amazon ECS for container orchestration (with a small API gateway based on Nginx and Consul).

There was no inter-process (HTTP etc.) communication between the apps – we just used the modules’ APIs for synchronous calls (mostly queries) and Quartz tasks for asynchronous commands. We also had an ELK stack for logging and tracing.

To be honest, we were 90% happy. The remaining 10% of things that didn’t work well included:

  • many small modules – at the beginning we had some performance problems with IntelliJ indexing all the small modules
  • cyclic dependencies – sometimes we discovered cyclic dependencies between the modules; on the other hand, this helped us remove some flaws from our design
  • tight coupling – our apps were designed from the API (consumer) perspective rather than with clean DDD, bounded-context thinking; separate modules could help with this – e.g. a new delivery system was developed as a separate part, with its own database schema
  • “all-at-once” deployment – and yes, having just one database, we often had to deploy multiple applications at once.

Having said that, I think we struck a good balance between the two worlds. We didn’t struggle with massive infrastructure and tooling, or with juggling different repositories and data sources. We were able to move as fast as if we were building a single-repository monolith application. But we had some parts separated, with their own configuration/boilerplate, ready to be deployed and scaled independently.

You May Also Like

Multi module Gradle project with IDE support

This article is a short how-to on setting up a multi-module project with the Gradle build automation tool.

Here's how Rich Seller, a StackOverflow user, describes Gradle:
Gradle promises to hit the sweet spot between Ant and Maven. It uses Ivy's approach for dependency resolution. It allows for convention over configuration but also includes Ant tasks as first class citizens. It also wisely allows you to use existing Maven/Ivy repositories.
So why would one use yet another JVM build tool such as Gradle? The answer is simple: to avoid the frustration involved with Ant or Maven.

Short story

I was fooling around with a fresh proof of concept and needed a build tool. I'm pretty familiar with Maven, so I created a project from an archetype and opened the build file, pom.xml, for further tuning.
I had been using Grails, with its own build system (similar to Gradle, btw), for some time by then, so after quite a while without Maven I looked at the pom.xml and found it really repulsive.

Once again I felt clearly: XML is not for humans.

After some quick googling I found Gradle. It was still in beta (version 0.8) back then, but it's configured with a Groovy DSL, and that's what a human likes :)

Where are we

At a time when Ant is found only among IT guerrillas, Maven is still on top, and a couple of others, like Ivy, compete for the best position, Gradle has smoothly entered its mature age. It's now available in version 1.3, released on the 20th of November 2012. I'm glad to recommend it to anyone looking for relief from XML-configured tools, or to anyone just looking for a simple, flexible and powerful build tool.

Let's build

I have already written about the basic project structure, so I'll skip the details here and only recall the layout:
<project root>
├── build.gradle
└── src
    ├── main
    │   ├── java
    │   └── groovy
    └── test
        ├── java
        └── groovy
Have I just referred to myself for the first time? Achievement unlocked! ;)

Gradle, like most build tools, is run from the command line with parameters. The main parameter for Gradle is a 'task name'; for example, we can run the command: gradle build.
There is no 'create project' task, so the directory structure has to be created by hand. This isn't a hassle, though.
The java and groovy sub-folders aren't always mandatory; they depend on which compile plugin is used.

Parent project

Consider an example project, 'the-app', with three modules, let's say:
  1. database communication layer
  2. domain model and services layer
  3. web presentation layer
Our project directory tree will look like:
the-app
├── dao-layer
│   └── src
├── domain-model
│   └── src
├── web-frontend
│   └── src
├── build.gradle
└── settings.gradle
the-app itself has no src sub-folder, as its purpose is only to contain sub-projects and the build configuration. If needed, it could have its own src as well.

To glue the modules together, we need to fill the settings.gradle file in the the-app directory with a single line specifying the module names:
include 'dao-layer', 'domain-model', 'web-frontend'
Now the gradle projects command can be executed, giving the following result:
:projects

------------------------------------------------------------
Root project
------------------------------------------------------------

Root project 'the-app'
+--- Project ':dao-layer'
+--- Project ':domain-model'
\--- Project ':web-frontend'
...so we know that Gradle has noticed the modules. However, the gradle build command won't run successfully yet, because the build.gradle file is still empty.

Sub project

As in Maven, we can create a separate build config file for each module. Let's say we start with the DAO layer.
We create a new file, the-app/dao-layer/build.gradle, with a line of basic build info (notice that the new build.gradle is created under the sub-project directory):
apply plugin: 'java'
This single line of config in any of the modules is enough to execute the gradle build command in the the-app directory, with the following result:
:dao-layer:compileJava
:dao-layer:processResources UP-TO-DATE
:dao-layer:classes
:dao-layer:jar
:dao-layer:assemble
:dao-layer:compileTestJava UP-TO-DATE
:dao-layer:processTestResources UP-TO-DATE
:dao-layer:testClasses UP-TO-DATE
:dao-layer:test
:dao-layer:check
:dao-layer:build

BUILD SUCCESSFUL

Total time: 3.256 secs
To use the Groovy plugin, slightly more configuration is needed:
apply plugin: 'groovy'

repositories {
    mavenLocal()
    mavenCentral()
}

dependencies {
    groovy 'org.codehaus.groovy:groovy-all:2.0.5'
}
At lines 3 to 6, Maven repositories are set. At line 9, the dependency on the Groovy library version is specified. Of course, plugins such as 'java', 'groovy' and many more can be mixed with each other.

If we have a settings.gradle file and a build.gradle file for each module, there is no need for a parent the-app/build.gradle file at all. That's true, but we can go another, better way.

One file to rule them all

Instead of creating many build.gradle config files, one per module, we can use only the parent's one and make it a bit more juicy. So let us move the-app/dao-layer/build.gradle a level up to the-app/build.gradle and fill it with new statements to achieve a full project configuration:
def langLevel = 1.7

allprojects {

    apply plugin: 'idea'

    group = 'com.tamashumi'
    version = '0.1'
}

subprojects {

    apply plugin: 'groovy'

    sourceCompatibility = langLevel
    targetCompatibility = langLevel

    repositories {
        mavenLocal()
        mavenCentral()
    }

    dependencies {
        groovy 'org.codehaus.groovy:groovy-all:2.0.5'
        testCompile 'org.spockframework:spock-core:0.7-groovy-2.0'
    }
}

project(':dao-layer') {

    dependencies {
        compile 'org.hibernate:hibernate-core:4.1.7.Final'
    }
}

project(':domain-model') {

    dependencies {
        compile project(':dao-layer')
    }
}

project(':web-frontend') {

    apply plugin: 'war'

    dependencies {
        compile project(':domain-model')
        compile 'org.springframework:spring-webmvc:3.1.2.RELEASE'
    }
}

idea {
    project {
        jdkName = langLevel
        languageLevel = langLevel
    }
}
At the beginning, a simple variable, langLevel, is declared. It's worth knowing that we can use almost any Groovy code inside a build.gradle file – statements like if conditions, for/while loops, closures, switch-case, etc. Quite an advantage over inflexible XML, isn't it?
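For example, a made-up fragment like this (not part of the example project, just an illustration) is perfectly valid inside build.gradle:

def env = System.getProperty('env', 'dev')    // e.g. gradle build -Denv=prod

// plain Groovy logic driving the build configuration
version = (env == 'prod') ? '0.1' : '0.1-SNAPSHOT'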

Next comes the allprojects block. Any configuration placed in it will influence – what a surprise – all projects, so the parent itself and the sub-projects (modules). Inside the block we have the IDE (IntelliJ IDEA) plugin applied, which I wrote more about in a previous article (look under the "IDE Integration" heading). Suffice it to say that with this plugin applied here, the command gradle idea will generate IDEA's project files with the module structure and dependencies. This works really well, and plugins for other IDEs are available too.
The remaining two lines in this block define the group and version for the project, similar to how it is done in Maven.

After that, the subprojects block appears. It applies to all modules but not to the parent project. Here the Groovy language plugin is applied, as all modules are assumed to be written in Groovy.
Below that, the source and target language levels are set.
Then come references to the standard Maven repositories.
At the end of the block are the dependencies on the Groovy version and the test library – the Spock framework.

The following blocks, project(':module-name'), are responsible for per-module configuration. They can be omitted when allprojects or subprojects already configure everything a specific module needs. In the example, the per-module configuration goes as follows:
  • The dao-layer module has a dependency on an ORM library – Hibernate.
  • The domain-model module relies on dao-layer as a dependency. The keyword project is used here again as a reference to another module.
  • The web-frontend module applies the 'war' plugin, which builds this module into a Java web archive. Besides that, it refers to the domain-model module and also uses a Spring MVC framework dependency.

At the end, the idea block contains basic info for the IDE plugin. These are parameters corresponding to IDEA's general project settings, visible on the following screenshot.


jdkName should match the IDE's SDK name, otherwise it has to be set manually in the IDE after each (re)generation of IDEA's project files with the gradle idea command.

Is that it?

In terms of simplicity – yes. That's enough to automate a modular application build with custom configuration per module. Not rocket science, huh? Think about Maven's XML: it would take more effort to set up the same thing, and you'd still end up with a less expressive configuration that's far from user-friendly.

Check the online user guide for the many configuration possibilities or, better, download Gradle and see the sample projects.
As a tasty bait, take a look at this short selection of available plugins:
  • java
  • groovy
  • scala
  • cpp
  • eclipse
  • netbeans
  • idea
  • maven
  • osgi
  • war
  • ear
  • sonar
  • project-report
  • signing
and more, 3rd party plugins...