When a distributed monolith may work for you

The term “Distributed Monolith” certainly gets bad press. Reading through blogs and conference talks, you get the impression that you’d better build a “traditional” monolith rather than a distributed one. If you end up with the latter, it means you tried to build microservices but failed. And now your life is sad and full of pain ;) In this post I’d like to describe our case, where we built something like a distributed monolith intentionally, with quite a pleasant final experience.

In my opinion the key to microservices lies in Conway’s Law. Looking from that perspective, you’d think that the primary use case for microservices is a large organization with an in-house IT department and many small teams, working independently, with a lot of control over their part of the system. But this doesn’t cover the full spectrum of the software development market ;)

When you are a software house, you deal with various types of projects. This includes maintenance, rewriting parts of large systems and – last but not least – developing new systems from scratch. But the development process is much different from in-house IT: we use small, versatile teams (up to 10 people), not tightly bound to roles like webdev or tester (developers, not programmers). There are no two identical projects for us: every customer has their own needs, tools, infrastructure, procedures, etc. (see http://ericsink.com/No_Programmers.html)

When you plan your system architecture, you often go through some checklist that can help you make the best choice. I think the most interesting question on such a list is:

Is there a part of the system that a single person, or a small team, can operate independently?

The usual answer at a software house is “Hmm… no”. So are we forced to always build monoliths?

The good parts of a monolith

Let’s think first about what the good parts of a monolithic architecture could be, especially for a small team inside a software house:

Simple communication model

When you want to deliver fast, you have to reduce overhead. If your customer asks for a feature, telling them that you first have to investigate how to make your microservices talk to each other may not be acceptable. Also, having less service-to-service communication makes monitoring and debugging in production a lot easier. It’s different when you build a large microservice stack for a big organization – there you have surely convinced the business owners that you need some time to invest in tools and infrastructure, and that they have to wait for it to be done before features arrive ;)

Single repository

At software houses we usually have a lot of freedom in choosing CI/CD tooling: some teams go with full-stack GitLab, some prefer Gerrit + Jenkins, some work on the customer’s infrastructure. We don’t have a repeatable, complicated build pipeline for every project – usually it’s just running tests, doing some static code analysis (with Sputnik) and sometimes pushing Docker images somewhere. It’s simply easier to have one repository than to set up several of them independently.

Single data source

This also varies across projects, but most of them need just one data source, often just an RDBMS (Postgres) to store business data. It’s easy to administer, manage and migrate. If you want a bit less coupling between parts of the system, separate database schemas can be enough for a start.
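
For illustration, a minimal sketch (the database and schema names here are made up, not the project’s): the PostgreSQL JDBC driver lets each part of the system default to its own schema in the same database, via the currentSchema URL parameter.

// Same Postgres database, but each module defaults to its own schema -
// a cheap first step towards decoupling (names are illustrative only).
const val BILLING_DB_URL = "jdbc:postgresql://db:5432/app?currentSchema=billing"
const val DELIVERY_DB_URL = "jdbc:postgresql://db:5432/app?currentSchema=delivery"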

Simple deployment infrastructure

Just like above: from the deployment perspective, the simpler, the better. Your customer doesn’t want you to spend 3 weeks setting up a Kubernetes cluster. If you can go with plain Docker without dedicated orchestration – go with it. For us, AWS ECS with some home-made automation worked really well.

The good parts of microservices

On the other hand, while a monolith has many good parts, building one can also be really painful. I think there are numerous advantages you get from splitting the whole system into a few smaller parts. The key is having multiple deployment units, which really gives you benefits:

Resource separation

Bugs happen. It is obvious that your system will someday fail in production. The real question is: what happens then? How does your system deal with errors? If you use shared resources like database or HTTP client connection pools, a single – even non-critical – part of your system may cause a full system outage. That’s why splitting your system into a few smaller applications matters. You (and your customers) can probably live with one of the minor functionalities failing.
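
To make that concrete, here is a minimal Kotlin sketch (pool names and sizes are invented) of what the split buys you: each application owns its own connection pool, so a misbehaving, non-critical app can exhaust only its own connections, not everyone’s.

import com.zaxxer.hikari.HikariConfig
import com.zaxxer.hikari.HikariDataSource

// Each deployed app builds its own pool. In a monolith with one shared pool,
// slow reporting queries could hold every connection and starve the checkout flow.
fun appPool(name: String, maxSize: Int): HikariDataSource =
    HikariDataSource(HikariConfig().apply {
        poolName = name
        jdbcUrl = "jdbc:postgresql://db:5432/app"
        username = "app"
        password = System.getenv("DB_PASSWORD")
        maximumPoolSize = maxSize // the failure boundary: exhausting it hurts only this app
    })

val checkoutPool = appPool("checkout", 20)  // critical path, generously sized
val reportingPool = appPool("reporting", 5) // non-critical, capped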

Independent scaling

It is good when your system can scale. It’s better when you can scale parts of your system independently. Some features are massively used and business critical – you want to run them in high-availability mode, with the possibility to add instances on the fly when traffic increases. But you also have other parts (e.g. the back office) that are not heavily used, often with zero traffic after working hours. With multiple applications instead of a monolith, independent scaling is much easier to achieve.

Network separation

Usually, parts of a big system are used from different places. You have some public services (e.g. a mailing service), often with two-way communication (webhooks). On the other hand, a back-office part shouldn’t be exposed publicly – ideally it’s hidden behind a VPN to the customer’s office. Having these parts separated makes it easier to put them in different networks and then set up a dedicated policy for each part.

Separate configuration

Defining a proper authentication configuration for a Java (even Spring) app can be hard. I guess that defining multiple authentication models (e.g. logged-in users, internal features for call-center people and public service calls) for different parts of a monolithic application could be hell. Splitting a monolith into several deployment units, each with its own authentication configuration, may save you many hours of debugging magical, untestable code provided by your framework.
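
As a sketch of how this plays out per deployment unit (written in today’s component-based Spring Security style, which postdates our project – class names and paths are hypothetical): each app declares only the one authentication model it needs.

import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.security.config.Customizer
import org.springframework.security.config.annotation.web.builders.HttpSecurity
import org.springframework.security.web.SecurityFilterChain

// The public-api app: HTTP Basic only, nothing about call-center logins here.
@Configuration
class PublicApiSecurity {
    @Bean
    fun filterChain(http: HttpSecurity): SecurityFilterChain {
        http.securityMatcher("/api/**")
            .authorizeHttpRequests { it.anyRequest().authenticated() }
            .httpBasic(Customizer.withDefaults())
        return http.build()
    }
}

The call-center app would ship a similarly small class with form login, and neither configuration can interfere with the other.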

Technology experiments

It’s really bad for a project when you’re stuck with a legacy technology. Having thousands of lines written in a 90s-style JavaScript framework can be really demotivating for developers. When you have a few smaller deployment units, it’s much easier to run technology experiments. You can either write a whole small part in some crazy technology (we use Elm for a small frontend piece) or try to upgrade some important library in just one place first (we switched to the 4.x Redux router in a smaller application before upgrading the bigger one).

Taking the best from both worlds

In our case, we think we took the best from both approaches. We had a small team (8 people at its peak) working on a single Git repository. We delivered 10 separate applications, all using the same data source (Postgres). We used Kotlin + Spring Boot on the backend and React on the frontend. We used Gradle modules to split the backend code into reusable parts, and each Spring Boot app just picked the pieces it needed.
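
The layout could look roughly like this (a sketch using the Gradle Kotlin DSL – the module names are invented, not our actual ones): settings.gradle.kts declares the modules, and each app depends only on the pieces it needs.

// settings.gradle.kts - one repository, many modules
rootProject.name = "our-system"
include(
    ":common",          // shared utilities and module APIs
    ":orders",          // a reusable business module
    ":backoffice-app",  // a Spring Boot app
    ":public-api-app"   // another Spring Boot app
)

// public-api-app/build.gradle.kts - the app picks just the modules it needs
dependencies {
    implementation(project(":common"))
    implementation(project(":orders"))
}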

We packed the apps into Docker images, built them with GitLab CI pipelines and used Amazon ECS for container orchestration (with a small API gateway based on Nginx and Consul).

There was no inter-process (HTTP etc.) communication between the apps – we just used the modules’ APIs for synchronous calls (mostly queries) and Quartz tasks for asynchronous commands. We also had an ELK stack for logging and tracing.
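
A hedged sketch of what such a modules-API call looks like (the interface and class names are made up): the calling app depends on the module’s Gradle artifact and invokes it as a plain Spring bean – an in-process method call, with no HTTP and no serialization involved.

import org.springframework.stereotype.Service

// Exposed by the orders module as its public API.
interface OrderQueries {
    fun findStatus(orderId: Long): String?
}

// Lives in another module/app; the implementation is injected as a Spring bean.
@Service
class DeliveryPlanner(private val orders: OrderQueries) {
    fun planDelivery(orderId: Long): String {
        val status = orders.findStatus(orderId)
            ?: error("Order $orderId not found")
        // the asynchronous follow-up would be scheduled as a Quartz job
        return "Delivery planned for order $orderId (status: $status)"
    }
}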

To be honest, we were 90% happy. The remaining 10% of things that didn’t work well included:

  • many small modules – at the beginning we had some performance problems with IntelliJ indexing all the small modules
  • cyclic dependencies – sometimes we discovered cyclic dependencies between these modules; on the other hand, it helped us remove some flaws from our design
  • tight coupling – our apps were designed from the API (consumer) perspective rather than with clean DDD, bounded-context thinking; separate modules could help with this – e.g. a new delivery system was developed as a separate part, with its own database schema
  • “all-at-once” deployment – and yes, having just one database, we often had to deploy multiple applications at once.

Having said that, I think we struck a good balance between the two worlds. We didn’t struggle with massive infrastructure and tooling, or with working across different repositories and data sources. We were able to go as fast as if we were building a single-repository monolith application. But we had some parts separated, each with its own configuration and boilerplate, ready to be deployed and scaled independently.

You May Also Like

Clojure web development – state of the art

It’s now been more than a year since I started getting familiar with Clojure, and the more I dive into it, the more it becomes the language for me. Once you defeat the “parentheses fear”, everything else just makes the difference: tooling, community, good engineering practices. So it’s now time for me to convince others. In this post I’ll walk through a simple web application from scratch, to show the key tools and libraries used to develop with Clojure in late 2015.

Note for Clojurians: This material is rather elementary and may be useful for you if you already know Clojure a bit but have never done anything bigger than a hello world application.

Note for Java developers: This material shows how to replace Spring, Angular, grunt, live-reload with a bunch of Clojure tools and libraries and a bit of code.

The repo with final code and individual steps is here.

Bootstrap

I think everyone agrees that component is the industry standard for managing the lifecycle of Clojure applications. If you are a Java developer, you may think of it as a Spring (DI) replacement – you declare dependencies between “components”, which are resolved on “system” startup. So you just say “my component needs a repository/database pool” and the component library “injects” it for you.

To keep things simple I like to start with the duct web app template. It’s a nice starter component application following the 12-factor philosophy. So let’s start with it:

lein new duct clojure-web-app +example

The +example parameter tells duct to create an example endpoint with HTTP routes – this will be helpful. To finish bootstrapping, run lein setup inside the clojure-web-app directory.

Ok, let’s dive into the code. Component- and injection-related code should be in the system.clj file:

(defn new-system [config]
  (let [config (meta-merge base-config config)]
    (-> (component/system-map
         :app  (handler-component (:app config))
         :http (jetty-server (:http config))
         :example (endpoint-component example-endpoint))
        (component/system-using
         {:http [:app]
          :app  [:example]
          :example []}))))

In the first section you instantiate components without dependencies; these are resolved in the second section. So in this example the “http” component (the server) requires “app” (the application abstraction), which in turn is injected with “example” (the actual routes). If your component needs others, you can just get them by name (precisely: by Clojure keyword).

To start the system you must fire up a REPL – an interactive environment running within the context of your application:

lein repl

After seeing the prompt, type (go). The application should start; you can visit http://localhost:3000 to see an example page.

A huge benefit of the component approach is that you get a fully reloadable application. When you change literally anything – configuration, endpoints, implementation – you can just type (reset) in the REPL and your application is up to date with the code. It’s a feature of the language – no JRebel or Spring Loaded needed.

Adding REST endpoint

Ok, in the next step let’s add a basic REST endpoint returning JSON. We need to add 2 dependencies in the project.clj file:

:dependencies
 ...
  [ring/ring-json "0.3.1"]
  [cheshire "5.1.1"]

Ring-json adds JSON support to your routes (in ring this is called middleware) and cheshire is a Clojure JSON parser (like Jackson in Java). Modifying project dependencies is one of the few tasks that require restarting the REPL, so hit CTRL-C and type lein repl again.

To configure JSON middleware we have to add wrap-json-body and wrap-json-response just before wrap-defaults in system.clj:

(:require 
 ...
 [ring.middleware.json :refer [wrap-json-body wrap-json-response]])

(def base-config
  {:app {:middleware [[wrap-not-found :not-found]
                      [wrap-json-body {:keywords? true}]
                      [wrap-json-response]
                      [wrap-defaults :defaults]]
         ...}})

And finally, in endpoint/example.clj we must add a route with a JSON response:

(:require 
 ...
 [ring.util.response :refer [response]]))

(defn example-endpoint [config]
  (routes
    (GET "/hello" [] (response {:hello "world"}))
    ...

Reload the app with (reset) in the REPL and test the new route with curl:

curl -v http://localhost:3000/hello

< HTTP/1.1 200 OK
< Date: Tue, 15 Sep 2015 21:17:37 GMT
< Content-Type: application/json; charset=utf-8
< Set-Cookie: ring-session=37c337fb-6bbc-4e65-a060-1997718d03e0;Path=/;HttpOnly
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< X-Content-Type-Options: nosniff
< Content-Length: 151
* Server Jetty(9.2.10.v20150310) is not blacklisted
< Server: Jetty(9.2.10.v20150310)
<
* Connection #0 to host localhost left intact
{"hello": "world"}

It works! In case of any problems you can find a working version in this commit.

Adding frontend with figwheel

Coding the backend in Clojure is great, but what about the frontend? As you may already know, Clojure can be compiled not only to JVM bytecode, but also to JavaScript. This may sound familiar if you have used e.g. CoffeeScript. But the ClojureScript philosophy is not just to provide some syntactic sugar, but to improve your development cycle with great tooling and fully interactive development. Let’s see how to achieve that.

The best way to introduce ClojureScript to a project is figwheel. First let’s add the figwheel plugin and configuration to project.clj:

:plugins
   ...
   [lein-figwheel "0.3.9"]

And cljsbuild configuration:

:cljsbuild
    {:builds [{:id "dev"
               :source-paths ["src-cljs"]
               :figwheel true
               :compiler {:main       "clojure-web-app.core"
                          :asset-path "js/out"
                          :output-to  "resources/public/js/clojure-web-app.js"
                          :output-dir "resources/public/js/out"}}]}

In short, this tells the ClojureScript compiler to take sources from src-cljs, with figwheel support, and put the resulting JavaScript into the resources/public/js/clojure-web-app.js file. We need to include this file in a simple HTML page:

<!DOCTYPE html>
<html>
<head>
</head>
<body>
  <div id="main">
  </div>
  <script src="js/clojure-web-app.js" type="text/javascript"></script>
</body>
</html>

To serve this static file we need to change some defaults and add a corresponding route. In system.clj change api-defaults to site-defaults, both in the require section and in the base-config function. In example.clj add the following route (this assumes clojure.java.io is required as io in the namespace):

(GET "/" [] (io/resource "public/index.html")

Again, (reset) in the REPL window should reload everything.

But where is our ClojureScript source file? Let’s create a core.cljs file in the src-cljs/clojure-web-app directory:

(ns ^:figwheel-always clojure-web-app.core)

(enable-console-print!)

(println "hello from clojurescript")

Open another terminal and run lein figwheel. It should compile the ClojureScript and print ‘Prompt will show when figwheel connects to your application’. Open http://localhost:3000. The figwheel window should show a prompt:

To quit, type: :cljs/quit
cljs.user=>

Type (js/alert "hello"). Boom! If everything worked, you should see an alert in your browser. Now open the developer console in your browser – you should see hello from clojurescript printed there. Change it in core.cljs to (println "figwheel rocks") and save the file. Without reloading the page, you should see the updated message. Figwheel rocks! Again, in case of any problems, refer to this commit.

In the next post I’ll show how to fetch data from MongoDB, serve it over REST to the browser and write ReactJS/Om components to render it. Stay tuned!