Learn about microservice orchestration, the process of coordinating and managing interactions between microservices, and the tools available for it.
Daniel McNeela
Machine Learning Engineer and DevOps Expert
Microservices are atomic by nature. Optimally written microservices each perform only a single business function, be it calculating tax on an order, logging a user in, or inserting a new customer into a database. There are many advantages to this method of structuring applications. For one, microservices can be scaled individually, each in accordance with its own demand, leading to a more efficient use of hardware resources.
Furthermore, each microservice is self-contained, so the inner mechanisms by which it works need not be known to outside application developers and users. A microservice need only define an API through which users can pass data to it and receive data back. Additionally, microservice architectures are quite fault-tolerant, as the failure of a single service can often be isolated without affecting the remainder of the application.
That said, microservices have their own set of challenges. For one, due to their unitary nature, a sequence of microservices is needed to execute a full-blown business process, and each of the individual services must interact with the others.
Microservice orchestration refers to the procedure of coordinating and managing these interactions between microservices. In this paradigm, there is usually a central “brain”, i.e. a service which orchestrates the behavior at the intersection of microservices, calling upon one or the other via a set of well-defined rules and passing data between them.
There are a variety of tools that exist to enable and facilitate microservice orchestration. Some of them are cloud provider specific, while others work with any set of containerized microservices.
Probably the most well-known tool for microservice orchestration is Kubernetes. Kubernetes works exclusively with containerized applications, meaning that in order to use it, you need to package your microservices in Docker containers (or another compatible container format). Kubernetes groups application containers into units called pods.
From there, Kubernetes provides an API which allows you to expose services running in pods. Kubernetes makes scaling microservices straightforward. It also provides useful features such as load balancing, topology-aware traffic routing, the ability to interact with storage volumes, and self-healing in response to container failure. Furthermore, many cloud platforms provide managed Kubernetes services, from Amazon’s EKS to Microsoft Azure’s AKS.
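As a concrete sketch, the manifest below defines a hypothetical “tax-calculator” microservice as a Kubernetes Deployment with three replicas, plus a Service that exposes the pods behind a stable endpoint. The names, image, and ports are illustrative placeholders, not taken from any real application.

```yaml
# Hypothetical example: one independently scalable microservice.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tax-calculator
spec:
  replicas: 3                  # scale this one service according to its own demand
  selector:
    matchLabels:
      app: tax-calculator
  template:
    metadata:
      labels:
        app: tax-calculator
    spec:
      containers:
        - name: tax-calculator
          image: example.com/tax-calculator:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
# Service exposing the pods above to other microservices in the cluster.
apiVersion: v1
kind: Service
metadata:
  name: tax-calculator
spec:
  selector:
    app: tax-calculator
  ports:
    - port: 80
      targetPort: 8080
```

Other services in the application can then reach this one by its Service name, while Kubernetes handles load balancing across the replicas and restarts any container that fails.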
Netflix provides an open-source microservice orchestration tool called Conductor that allows microservice workflows to be scaled to millions of concurrently running processes. It works with any group of microservices containerized with Docker, and it provides an indexed backend using Elasticsearch, although you can swap in your own database persistence layer and queueing backend.
Workflows are defined using JSON, and client libraries are available in Java, Python, and a variety of other common programming languages. Conductor breaks down complex processes into a series of workflows, tasks, and workers, each of which operates within an ecosystem of distributed servers.
Communication between entities is handled by REST or gRPC calls, and workflow definitions as well as running tasks (and the workers to which they’re assigned) can be easily visualized via the bundled UI. Perhaps the most important feature of Conductor is that it allows users to easily restart and monitor failed tasks, which minimizes application downtime.
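To make the JSON workflow format concrete, here is a minimal hypothetical Conductor workflow definition that chains two worker tasks, passing the output of the first into the input of the second. The workflow, task, and parameter names are invented for illustration; real task types and input mappings should be checked against the Conductor documentation.

```json
{
  "name": "order_checkout",
  "description": "Hypothetical checkout workflow chaining two microservices",
  "version": 1,
  "tasks": [
    {
      "name": "calculate_tax",
      "taskReferenceName": "calculate_tax_ref",
      "type": "SIMPLE",
      "inputParameters": {
        "orderId": "${workflow.input.orderId}"
      }
    },
    {
      "name": "charge_payment",
      "taskReferenceName": "charge_payment_ref",
      "type": "SIMPLE",
      "inputParameters": {
        "orderId": "${workflow.input.orderId}",
        "tax": "${calculate_tax_ref.output.tax}"
      }
    }
  ],
  "outputParameters": {
    "receiptId": "${charge_payment_ref.output.receiptId}"
  }
}
```

Workers written in Java, Python, or another supported language poll for these tasks and execute them; if `charge_payment` fails, the workflow can be restarted from that task rather than from the beginning.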
Many cloud computing platforms provide their own tools for microservice orchestration. For example, AWS provides the Fargate service, which automatically provisions and scales compute resources for containers, including Kubernetes pods running on EKS.
Similarly, AWS Step Functions provides a visual interface for users to manage serverless workflows, which can include or consist entirely of containerized microservices. Google Cloud Platform provides similar tools. They offer access to Workflows, a service to “orchestrate and automate Google Cloud and HTTP-based API services with serverless workflows”.
Workflows lets you define business logic in YAML workflow definitions or wait for external events using callbacks. All of this is serverless, so developers need not grapple with the challenges of scaling and coordinating large-scale interactions between microservices.
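A Workflows definition of this kind might look like the following sketch, which calls one HTTP-based service and feeds its result into a second. The URLs, fields, and step names are hypothetical placeholders standing in for real microservice endpoints.

```yaml
# Hypothetical Google Cloud Workflows definition chaining two HTTP services.
main:
  params: [input]
  steps:
    - calculateTax:
        call: http.get
        args:
          url: https://tax-service.example.com/tax   # placeholder endpoint
          query:
            orderId: ${input.orderId}
        result: taxResponse
    - chargePayment:
        call: http.post
        args:
          url: https://payment-service.example.com/charge   # placeholder endpoint
          body:
            orderId: ${input.orderId}
            tax: ${taxResponse.body.amount}
        result: paymentResponse
    - returnResult:
        return: ${paymentResponse.body}
```

Because the service is serverless, this workflow scales with demand without the developer managing any of the underlying infrastructure.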
Microservices are key to the development of fault-tolerant and long-running business applications, particularly in cloud environments. Microservice architectures provide isolation, decomposability, and improved scalability relative to monolithic applications.
However, using the microservice style of development introduces additional challenges when it comes to orchestrating multiple services into a single application as developers need to become adept at handling load balancing, networking, microservice communication, and integration with storage backends.
Luckily, there exist many orchestration tools which help to automate this process, allowing development teams to seamlessly scale their applications to reach billions of users.