The case for microservices

Monolithic applications consist of components that are tightly coupled and have to be developed, deployed and managed as a single entity. Running a monolith typically requires a small number of powerful servers that can provide enough resources for the whole application.

To deal with increasing loads on the system, one could either:

  • Scale up, by adding more CPU, memory or storage to existing machines; or
  • Scale out, by provisioning more virtual machines and running multiple copies of the application

Scaling up usually doesn’t require any changes to the application, but it gets expensive quickly, and in practice hardware upgrades eventually hit a hard upper limit.

Scaling out is cheaper hardware-wise, but it may require big changes in the application code, which isn’t always possible or desirable. Moreover, certain parts of an application, such as a relational database, are difficult or impossible to scale horizontally. If any part of a monolithic application isn’t scalable, the whole application becomes unscalable.

Unless, of course, one could split up the monolith somehow.

Splitting an app into microservices

A complex monolithic application can be broken into smaller, independently deployable components called microservices. Each microservice runs as an independent process and communicates with other microservices through simple, well-defined APIs. This communication can occur over synchronous protocols such as HTTP or RPC, or over asynchronous protocols such as AMQP.

Because each microservice is a standalone process with its own external API, it’s possible to develop and deploy each one separately. A change to one service doesn’t require the others to be redeployed.
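To make this concrete, here is a minimal sketch of such a standalone service in Go, exposing a single HTTP endpoint. The service name, endpoint path and port are illustrative choices, not taken from the text:

```go
// A minimal "greeter" microservice: a standalone process exposing a small,
// well-defined HTTP API that other services can call.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type greeting struct {
	Message string `json:"message"`
}

func main() {
	http.HandleFunc("/greeting", func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(greeting{Message: "hello from the greeter service"})
	})

	// The service runs as its own process and listens on its own port, so it
	// can be built, deployed and restarted without touching other services.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the only contract between services is the API itself, the implementation behind it can change freely as long as the API stays compatible.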

Scaling microservices

In a microservices architecture, scaling is done on a per-service basis. Services that need more resources are replicated and run as multiple processes deployed on different servers, while the rest run as a single process each. Services that can’t be scaled out are scaled up instead.
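The sketch below illustrates what per-service scaling amounts to: only the heavily loaded service is replicated, and its callers spread requests across the replicas. The service name and replica addresses are placeholders, and the round-robin dispatcher is just one simple way to distribute the load:

```go
// A round-robin dispatcher over the replicas of a single scaled-out service.
// Other services in the system would keep running as one instance each.
package main

import (
	"fmt"
	"sync/atomic"
)

type roundRobin struct {
	replicas []string
	next     uint64
}

// pick returns the address of the next replica in rotation.
func (rr *roundRobin) pick() string {
	n := atomic.AddUint64(&rr.next, 1)
	return rr.replicas[(n-1)%uint64(len(rr.replicas))]
}

func main() {
	// Three replicas of the same service, each running on a different server.
	orders := &roundRobin{replicas: []string{
		"10.0.0.11:8080",
		"10.0.0.12:8080",
		"10.0.0.13:8080",
	}}

	for i := 0; i < 6; i++ {
		fmt.Println("routing request to", orders.pick())
	}
}
```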

Deploying microservices

Although microservices are deployed independently of each other, they must perform their work as a team. They need to be able to find each other’s network addresses and talk to each other. When deploying them, someone or something needs to configure them properly so they work together as a single system. As the number of services increases, this becomes tedious and error prone.
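The following sketch shows the simplest form of this wiring: a service learns the address of a peer through an environment variable set at deployment time. The variable name ORDERS_SERVICE_ADDR and the /orders endpoint are illustrative assumptions; with dozens of services, setting all such values by hand is exactly the tedious, error-prone work described above:

```go
// A service locating one of its peers through configuration injected at
// deployment time.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	addr := os.Getenv("ORDERS_SERVICE_ADDR")
	if addr == "" {
		log.Fatal("ORDERS_SERVICE_ADDR not set; the service cannot find its peer")
	}

	resp, err := http.Get("http://" + addr + "/orders")
	if err != nil {
		log.Fatalf("calling orders service: %v", err)
	}
	defer resp.Body.Close()
	fmt.Println("orders service responded with status", resp.Status)
}
```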

There are other problems, too. Debugging and tracing execution are typically harder in a microservices architecture, because a single request can span multiple processes and machines.
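One common mitigation, sketched below, is to attach a correlation ID to each incoming request and forward it on every outgoing call, so log lines from different processes can be stitched back together. The X-Request-ID header is a widespread convention and the downstream address is a placeholder, neither is prescribed by the text:

```go
// Propagating a correlation ID across service boundaries so that logs from
// separate processes can be correlated for one logical request.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"log"
	"net/http"
)

// requestID returns the incoming correlation ID, or generates a new one
// if this service is the first hop.
func requestID(r *http.Request) string {
	if id := r.Header.Get("X-Request-ID"); id != "" {
		return id
	}
	b := make([]byte, 8)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func main() {
	http.HandleFunc("/work", func(w http.ResponseWriter, r *http.Request) {
		id := requestID(r)
		log.Printf("request-id=%s handling /work", id)

		// Forward the same ID on the downstream call; both services' logs
		// will then carry the same request-id.
		req, _ := http.NewRequest(http.MethodGet, "http://downstream:8080/step", nil)
		req.Header.Set("X-Request-ID", id)
		if resp, err := http.DefaultClient.Do(req); err != nil {
			log.Printf("request-id=%s downstream call failed: %v", id, err)
		} else {
			resp.Body.Close()
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```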

Providing a consistent environment

One of the biggest, and certainly most infuriating, problems we have to deal with is the difference between execution environments. To reduce the number of problems that only show up in production (or staging), it would be ideal if applications could run in the exact same environment during development as they do in production.