Microservices Deployment Strategies

01 October 2021 at 10:00 by ParTech Media

Microservices have become a household name in the world of technology. Netflix, eBay, PayPal, and many other giants have evolved from a monolithic architecture to microservices.

Unlike microservices, a monolithic architecture is a single-tier design in which tightly coupled components are built as a single application on a single platform. To scale a specific service, the entire system has to be scaled, which turns out to be highly inefficient.

To overcome such issues, companies are now opting for microservices, as the architecture establishes isolation between services. This enables easy unit testing of individual services and makes the application more adaptable to changes in its dependencies.

Such benefits are encouraging companies to opt for microservices architecture. To deploy a microservice application, there are a bunch of deployment strategies that one can follow. This post takes you through the most common ones.

Table of Contents

  1. Multiple services per host
  2. Service instance per host pattern
  3. Serverless deployment
  4. Verdict

Multiple services per host

This is a fairly traditional approach followed by many firms, especially those without prior exposure to microservices. In this pattern, multiple service instances run on a single host, which can be physical or virtual. The services run side by side and share the host machine's resources, such as the operating system, CPU, and memory.

There are various ways of deploying service instances on a shared host. You can deploy each service instance as a separate process, for example one JVM (Java Virtual Machine) process per service. Alternatively, you can deploy multiple service instances in the same JVM, for example as web applications or OSGi bundles. The trade-off is that it is difficult to isolate the service instances from one another, as they all run on the same host.
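To make the idea concrete, here is a minimal Python sketch of the pattern: two hypothetical service instances ("order-service" and "billing-service" are illustrative names, not from the article) running on one host, distinguished only by port number and sharing the host's OS, CPU, and memory.

```python
# Sketch: multiple service instances per host, isolated only by port.
# Service names and port numbers are illustrative assumptions.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(payload: bytes):
    """Build a tiny request handler that always returns `payload`."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(payload)

        def log_message(self, *args):  # silence per-request logging
            pass
    return Handler

# Each "service instance" is just another process/thread on the shared host.
services = {8081: b"order-service", 8082: b"billing-service"}
for port, name in services.items():
    server = HTTPServer(("127.0.0.1", port), make_handler(name))
    threading.Thread(target=server.serve_forever, daemon=True).start()

# Both instances are reachable on the same machine; nothing limits how
# much CPU or memory one of them can take from the other.
for port, name in services.items():
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}").read()
    assert body == name
print("both service instances served from one host")
```

This also illustrates the pattern's weakness: the only boundary between the two instances is a port number.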

Here are some of the advantages of this strategy -

  1. Multiple service instances share the same operating system and server, resulting in efficient resource utilization.
  2. Deploying a whole suite of services for an application is fast.

Here are the disadvantages of this strategy -

  1. Poor isolation, as multiple service instances are hosted on a single machine.
  2. Dependency conflicts may arise between services.
  3. There is no limit on resource consumption. One service instance can gobble up the host's CPU or memory and starve the other instances.

Service instance per host pattern

In this pattern, each service instance runs in isolation on its own host, which can be a virtual machine or a container. The service instance per host pattern has two variations -

  1. Service instance per virtual machine
  2. Service instance per container

Service instance per virtual machine

Unlike multiple services per host, in this pattern each service is packaged as a virtual machine (VM) image, such as an Amazon EC2 AMI, so the service instances are well isolated. Each service instance is a VM launched from that image.

This approach is leveraged by Netflix to deploy its video streaming service. Each service is packaged into an Amazon Machine Image (AMI) using Aminator, Netflix's tool for baking services into AMIs, and service instances are then launched from these images.

Here are the advantages of this strategy -

  1. A major advantage of running service instances as isolated VMs is that resources are allocated strictly to each service and the resource consumption is contained. A service cannot steal resources from other services.

  2. Another benefit of packaging microservices as virtual machine images is that you can leverage mature cloud infrastructure, such as load balancing and auto-scaling.

  3. Deploying service instances as VMs encapsulates your service's implementation technology. The VM becomes a black box: you need not worry about what lies under the hood or how it works internally. As a result, deployment becomes simpler and more reliable.

Here are the disadvantages of this strategy -

  1. A typical IaaS provider charges for VMs regardless of whether they are idle or active. Although AWS provides auto-scaling, it is difficult to react rapidly to changes in demand; instead, you have to overprovision VMs, which increases the cost of deployment. Hence this approach is not very cost-effective.

  2. VMs are generally slow to instantiate because of their large image size, so deploying a new version of a service is slow.

  3. Handling and managing VMs can be a daunting task. Unless your firm uses dedicated VM management tooling, the service instance per VM pattern can distract you from your core business.

Service instance per container

In this pattern, service instances run in containers. Unlike virtual machines, which virtualize the entire computer system, containers virtualize at the operating system level, and you can limit each container's CPU and memory. To implement this pattern, you package your services into container images. A container image is a filesystem image that contains the application together with all the dependencies and libraries the service needs to run.
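As a sketch of what such packaging might look like, here is a hypothetical Dockerfile for a Java microservice (the image name, jar path, and port are illustrative assumptions, not from the article):

```
# Hypothetical container image for an "order-service".
# Base image, jar path, and port are illustrative.
FROM eclipse-temurin:17-jre
COPY target/order-service.jar /app/order-service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/order-service.jar"]
```

At run time the container's resources can then be capped with Docker's standard flags, e.g. `docker run --cpus="1.0" --memory="512m" order-service`, which is what gives this pattern its VM-like resource isolation.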

Here are the advantages of this strategy -

  1. Containers are similar to VMs in terms of isolating services, and you can monitor and limit the resources consumed by each container.
  2. Containers are lightweight compared to VMs. Container images take less time to build, and containers are typically faster to boot.
  3. You can focus on your core business without spending too much time managing your microservices, because the container management API helps you manage the microservices running in containers.

Here are the disadvantages of this strategy -

  1. Containers are not as secure as VMs, because all containers share the host operating system's kernel.
  2. Containers are often deployed on infrastructure with per-VM pricing. As with VMs, you have to overprovision resources, at extra cost, to handle large spikes in load.

Serverless deployment

Serverless deployment is becoming increasingly popular, as it eliminates the need to choose between VMs and containers for deploying microservices. This allows companies to focus on their core business instead of racking their brains over container or VM deployments.

AWS Lambda is a classic example of serverless deployment. To deploy a microservice, you package it as a ZIP file and upload it to AWS Lambda, along with metadata that includes the name of the function invoked to handle a request. AWS Lambda takes care of the rest, automatically running enough instances of your microservice to handle the incoming requests.
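The handler itself can be very small. Below is a minimal sketch of a Python Lambda handler for a hypothetical greeting service (the `name` field and function name are illustrative assumptions); Lambda invokes the registered handler with an event and a context object.

```python
# Minimal sketch of an AWS Lambda handler for a hypothetical service.
# The 'name' request field is illustrative, not part of Lambda itself.
import json

def lambda_handler(event, context):
    # Read an optional field from the incoming event.
    name = (event or {}).get("name", "world")
    # Return an HTTP-style response, as commonly used behind API Gateway.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation for testing; in production AWS Lambda calls the
# handler itself and scales the number of instances with request load.
response = lambda_handler({"name": "microservice"}, None)
print(response["statusCode"])  # 200
```

To deploy, this file would be zipped and uploaded, with the handler registered under a name such as `app.lambda_handler` (file and handler names here are assumptions for illustration).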

Here are the advantages of this strategy -

  1. Unlike the other two deployment strategies, serverless deployment does not use virtual hosts (virtual machines or containers), eliminating the need to manage any underlying infrastructure.
  2. Since a service is deployed as a simple ZIP file, it is relatively fast to build and release.
  3. Serverless infrastructure, such as AWS Lambda, uses request-based pricing, which means you only pay for the resources your services actually consume.

Here are the disadvantages of this strategy -

  1. Serverless deployment is not suitable for long-running services, such as those that consume messages from a third-party broker.
  2. Services must start quickly; otherwise, they may be timed out and terminated by the serverless platform.


Verdict

Deploying a microservices application can be quite taxing without the right strategy, as there may be hundreds of services running, written in different languages and frameworks. Each microservice is a mini-application with its own requirements for resources, scalability, and monitoring, so pick the strategy that fits your services best. Serverless deployment in particular is an intriguing pattern that is gaining popularity in organizations embracing DevOps.