Microservices Architecture: An Engineer's Complete Guide

Jun 23, 2021

When microservices architectures are mentioned, the image that is conjured is of an extremely complex system of CI/CD pipelines linked together and tended to by a mix of people and software. This isn't necessarily true. At least, the "extremely complex" part isn't true. The interconnected CI/CD pipeline part is definitely true. 

Microservices architectures—like most things in software development—have their strengths and weaknesses. In this post, we'll take a look at some important concepts when working with microservices. We just need to know enough about this architectural pattern so that we can make an informed choice about when to use it. 


Streamline your software delivery with Plutora!

Imagine a single dashboard managing all enterprise software delivery, boosting visibility, efficiency, and cutting costs. Experience Plutora's solutions today!

What Is a Microservices Architecture?

A microservices architecture is an architectural pattern that structures an application as a collection of loosely coupled services. These services are usually organized according to the business domain, and each service would generally be owned by one cross-functional team. The best way I can describe it is by comparing it with another popular architectural pattern: the monolithic architecture pattern.


The key distinction of a microservices-based architecture is that it is based around loosely coupled services. For extremely large systems with a complex business domain, microservices-based architectures can reduce the complexity of the system. You can reduce domain complexity by breaking down the application into smaller services that communicate with one another via a well-defined interface. Contrast this to a monolithic architecture, which encapsulates the entire application inside a unified structure. 

In a microservices-based architecture, you can scale individual services independently of one another. This means that you can focus resources in the area that your application needs rather than having to scale your entire application horizontally. This also means that you can deploy individual services one at a time, greatly simplifying your deployment process. Let's look at the benefits and drawbacks of using a microservices-based architecture over other architectural patterns. 
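To make "a small service behind a well-defined interface" concrete, here's a minimal sketch in Python using only the standard library. The endpoint path, port, and in-memory data are illustrative assumptions, not part of any real system:

```python
# A minimal, self-contained "equipment availability" service sketch.
# The endpoint name, port, and in-memory data are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In a real service this would live in the service's own datastore.
AVAILABILITY = {"mixer-01": 3, "camera-02": 0}

class AvailabilityHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The only contract other services rely on: GET /availability/<item-id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "availability":
            count = AVAILABILITY.get(parts[1])
            if count is not None:
                body = json.dumps({"item": parts[1], "available": count}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AvailabilityHandler).serve_forever()
```

Everything behind that one HTTP contract can be rewritten, rescaled, or redeployed without any other service noticing.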

The Good, the Bad, and the Ugly

Now that we're starting to understand microservices architectures, let's find out more about when using one is and isn't a good fit for your organization. 

The Good

  • A microservices architecture splits up the business domain, resulting in smaller pieces of business functionality that are easier to understand.

  • Functionality that's easier to understand can be more easily modeled as a standalone service.

  • Standalone services can use well-defined public interfaces to communicate with other loosely coupled services.

  • Loosely coupled services mean that you're less likely to have knock-on effects due to changes to a service.

  • Fewer knock-on effects make for a more reliable system, one where you can deploy and test each service independently without much worry.

The Bad

  • Monolithic systems can actually be faster. A monolith relies on in-process calls, while microservices communicate via remote procedure calls (RPCs) over the network, which take longer.

  • Microservices require a shift in your company's IT culture. Managing many services manually isn't a viable strategy; automating both quality assurance and deployment through CI/CD pipelines is a must-have.

  • Existing teams may need to skill up to learn about the new architectural pattern and the constellation of tools around it.

The Ugly

  • A badly designed migration to microservices can sink an application because it won't address root causes such as bad code. Add in the distributed nature of such systems, and it's akin to throwing gasoline on a fire.

Diving Into Microservices Design Considerations

All right, the picture is getting a little clearer. Now, let's find out about the things we should focus on when designing a well-thought-out microservices-based application. 

Use Functional Decomposition to Achieve Loose Coupling and High Cohesion

In software design, loose coupling and high cohesion are desired traits. In the book The Art of Scalability, authors Martin Abbott and Michael Fisher introduce the concept of a scale cube, which describes three dimensions of scaling: the x-axis (running multiple identical copies of the application), the y-axis (splitting the application by function), and the z-axis (partitioning data).


Scaling along the y-axis involves splitting the application into components using functional or resource-oriented boundaries. In the context of microservices, this means splitting a large application into smaller services that function independently and have enough utility on their own. Once you do this, you can deploy and scale these services independently of all other services in the system. 

Regardless of how disciplined you are, there's always the temptation to—inadvertently or otherwise—create tight coupling between components. Splitting the application into individual services has the added benefit of forcing you to be stricter about developing against an interface. Because of this, you can update internal components of the service without repercussions rippling through your application. 
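One way to enforce "develop against an interface" inside a service's own codebase is to make the boundary explicit and keep implementations swappable. Here's a hedged sketch using Python's typing.Protocol; the names are illustrative:

```python
# Sketch: callers depend on the interface, not on a concrete implementation,
# so internals (here, the storage backend) can change without rippling outward.
from typing import Protocol

class EquipmentCatalog(Protocol):
    def available_count(self, item_id: str) -> int: ...

class InMemoryCatalog:
    def __init__(self) -> None:
        self._stock = {"mixer-01": 3}

    def available_count(self, item_id: str) -> int:
        return self._stock.get(item_id, 0)

def can_fulfil(order_item: str, quantity: int, catalog: EquipmentCatalog) -> bool:
    # This function only knows about the interface; swapping InMemoryCatalog
    # for a database-backed implementation requires no change here.
    return catalog.available_count(order_item) >= quantity

print(can_fulfil("mixer-01", 2, InMemoryCatalog()))  # True
```

The same discipline applies at the service boundary itself: consumers see only the public API, never the internals.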

Organize Your Services Around Value Streams

We already know that we need to break up our application into services, but how do we do that? A recommended practice is to use business capability as the boundary. This means organizing your services around the different value streams provided by your company. 

To do this effectively, you need deep knowledge of the business domain so that you can match services to capabilities provided by the business. If you're already using a platform like Plutora to manage your value streams, identifying and mapping services to these value streams becomes easier. You can also find out more about value stream mapping and how to apply it on the Plutora blog. 

Let's look at the example of a company that provides audio-visual hardware rental and technical crew hiring. We could come up with the following subset of value streams: 

  • Crew planning

  • Equipment planning

  • Vehicle planning

  • Equipment availability tracking

  • Project planning

  • Warehouse management

Once you've identified your services, you can hand these over to cross-functional teams that can take over ownership. Individual teams could own multiple services that have cohesion with one another and require overlapping domain knowledge. 

The advantage of using a cross-functional team is that every member of the team will eventually become an expert in this domain. Also, the team that owns the service will be able to make decisions about it across the stack, without having to consult externally with someone who doesn't have the same level of domain knowledge. 

Minimize Choke Points

Once you start splitting up your services, you need to identify other choke points in your application and make sure that they don't become a single point of failure. For example, it's great to have all your services independently scaled and separated, but not if they're all connected to the same database or DB table. That introduces a single point of failure because if one service causes your DB to slow down, then all your services will be affected. 

Decentralize Everything

Once you start adopting the microservices architecture, you'll find that another benefit is being able to decentralize decision-making. The technologies used in building your monolith will no longer constrain teams from building new services because they function independently. Each service is standalone, and as such, the team managing that service can make its own decision about the best technology stack to be used for the job. This will need to be balanced against organizational goals because the question of "Can you use X tech?" isn't the same question as "Should you use X tech?" But it's always comforting to have the option to choose the best tool for the job. 

In addition to the decentralization of the technology stack, the way components are built and deployed within your application will also change. Teams may prefer to evolve and update the standards to better fit the organizational goals they are pursuing. They could then, in turn, redistribute this knowledge to other teams solving the same class of problems. I recommend adopting API design guidelines or using open-source tools like Swagger to make sure that teams don't waste time on problems that are already solved. 

Examining the Operational Aspect of Microservices Architectures

Now, let's have a look at some of the operational considerations of deploying and maintaining an application based on the microservices architecture. 

Automate Testing and Deployment

If your organization is going down the path of choosing the microservices architecture, you should be comfortable with automation. Automation includes aspects of automated testing as well as automated deployment. There are a lot of moving parts to a microservices architecture, and handling all these components manually may not be tenable. 

To start off with, your organization should have some form of automated testing before it integrates code changes into the main branch. These tests could take the form of unit tests, functional tests, API tests, or some other type. Any change made to a service should trigger these tests in your integration environment. Once the automated and manual checks accept the changes, you can deploy them to your testing environment. Once they pass a final review there, you can deploy the changes to production. 
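As an illustration, here's the kind of minimal check a pipeline could run on every change, using Python's built-in unittest. The availability logic under test is a hypothetical stand-in for a real service module:

```python
# Sketch of an automated check a CI pipeline could run on every commit.
import unittest

def can_fulfil(available: int, requested: int) -> bool:
    return available >= requested

class CanFulfilTests(unittest.TestCase):
    def test_enough_stock(self):
        self.assertTrue(can_fulfil(available=3, requested=2))

    def test_not_enough_stock(self):
        self.assertFalse(can_fulfil(available=1, requested=2))

if __name__ == "__main__":
    unittest.main()
```

The point isn't the test itself; it's that every service gets the same automated gate before its changes move forward.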

To set up this automated pipeline of triggers and actions, you need a continuous integration/continuous deployment (CI/CD) tool. There are plenty of paid and free options available; here's a good overview of the space as it stands now. 


Have a Strategy for Versioning and Release

Once you create your deployment pipeline, your CI/CD software will take care of the manual tasks surrounding the shipping of software. But what kind of approach should you take for deployment and versioning? 

Deployment

If all of your services run on the same environment (OS), you can deploy multiple services to that shared environment. However, the preferred approach is to give each microservice its own environment, which minimizes edge cases where one service interferes with another. Container technologies like Docker and virtualization platforms like Hyper-V make it much easier to package a microservice. Once packaged, each microservice can be deployed and managed completely independently of the others. 

Services such as Amazon ECS/EKS, Kubernetes, and Docker Swarm are known as container orchestrators. They take your packaged application and make sure it's running correctly so that your users are happy. They also take care of things like scaling your service up or down (x-axis scaling of your microservice) and health checks. Modern CI/CD solutions allow you to integrate the orchestration software right into the CI/CD pipeline. 
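Orchestrators typically decide whether to restart a container or route traffic to it by probing a health endpoint exposed by the service. Here's a minimal sketch of such an endpoint; the /healthz path is a common convention rather than a requirement, and the dependency check is a placeholder:

```python
# Minimal health endpoint an orchestrator's liveness/readiness probe could hit.
from http.server import BaseHTTPRequestHandler, HTTPServer

def dependencies_ok() -> bool:
    # Placeholder: a real service would test its critical dependencies here
    # (database connection, message broker, downstream APIs, and so on).
    return True

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz" and dependencies_ok():
            self.send_response(200)
        else:
            self.send_response(503)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8081), HealthHandler).serve_forever()
```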

Versioning

As we mentioned previously, an advantage of the microservices pattern is developing against an interface. But what if that interface keeps changing because of updates? When you deploy a microservice for use by others, you're making a promise that the service will work until it's deprecated. If you change the API methods or the parameters in use with every deployment, then you're making it hard for people to use your service. That completely defeats the purpose of using this architecture style. 

What's the best way to deal with changing requirements? One approach is to version any changes to your interface and have multiple versions of your API hosted. Maintaining an API life cycle policy will help you determine how to introduce new versions. It can also help you with how many versions you will maintain and how you will deprecate old versions. Using something like semantic versioning will give further context to the API versions you deploy. 
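A simple way to keep older consumers working is to host multiple versions of the interface side by side and route requests by a version prefix. The prefixes and handlers below are illustrative:

```python
# Sketch: two API versions hosted side by side so existing consumers keep working.
# v1 returns a flat count; v2 (a breaking change) nests it under "availability".
def handle_v1(item_id: str) -> dict:
    return {"item": item_id, "available": 3}

def handle_v2(item_id: str) -> dict:
    return {"item": item_id, "availability": {"count": 3, "unit": "pieces"}}

ROUTES = {
    "/v1/availability": handle_v1,
    "/v2/availability": handle_v2,
}

def dispatch(path: str, item_id: str) -> dict:
    # Old consumers keep calling /v1/... until it is formally deprecated.
    return ROUTES[path](item_id)

print(dispatch("/v1/availability", "mixer-01"))
print(dispatch("/v2/availability", "mixer-01"))
```

Your API life cycle policy then decides how long /v1 stays around and how its retirement is communicated.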

Another approach is to adopt something like consumer-driven contracts. In consumer-driven contracts, each service that consumes a public-facing API captures the expectations of the provider API in a separate contract. These contracts can then be made available to the provider API so it knows what services are depending on it. This will, in turn, provide insight into the obligations it must fulfill for each consumer API. By integrating this process into your build service, you can be automatically notified of breaking changes. 
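Dedicated tools such as Pact exist for this, but the core idea fits in a few lines: the consumer publishes the shape of the response it relies on, and the provider's build verifies it still satisfies that shape. Everything below (field names, the sample response) is illustrative:

```python
# Sketch of a consumer-driven contract check, run as part of the provider's build.
CONSUMER_CONTRACT = {
    "endpoint": "/v1/availability",
    "required_fields": {"item": str, "available": int},
}

def provider_response() -> dict:
    # Stand-in for calling the provider in a test environment.
    return {"item": "mixer-01", "available": 3, "extra": "ignored by this consumer"}

def verify(contract: dict, response: dict) -> list[str]:
    problems = []
    for field, expected_type in contract["required_fields"].items():
        if field not in response:
            problems.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            problems.append(f"wrong type for field: {field}")
    return problems

issues = verify(CONSUMER_CONTRACT, provider_response())
print("contract satisfied" if not issues else issues)
```

If the provider's build runs this check for every registered consumer, a breaking change is caught before it ships.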

Release Management

Release management is a difficult enough topic without also having to handle it across multiple services and business domains. Delivering high-quality software on time and under budget requires versatile tools that can take the pain and complexity out of the process. If you have an application that is critical to the functioning of your business, platforms such as Plutora can help you manage releases and make sure you're delivering value to your customers. 

Account for Failure

In simple terms, systems crash, and the best-laid plans will fail. Accounting for failure is a requirement in software design, but more so with microservices architectures. This is primarily because this type of architecture depends on the correct functioning of many smaller pieces of software. As the number of functioning components and their connections increases, so does the chance of failure of one of those connections. But, luckily, there are certain techniques that you can use to help ward off the worst of the fallout. 

Use the Circuit Breaker Pattern

The circuit breaker pattern is a software pattern that introduces a proxy when making an external service call. The proxy will make the service call and apply a timeout to the operation. If the operation completes in time, all is well. If the call is taking too long because of some delay, the timeout will trigger and the operation will return as failed. At that point, you can either retry or bubble that error up the call stack. 

This software pattern prevents a single remote call from holding up a whole chain of external calls by functioning as a circuit breaker on the errant remote service. After repeated failures, the breaker "opens" and further calls fail fast without touching the remote service at all; once a cool-down period passes, it lets a trial request through to check whether the service has recovered. 


As seen in the above image, the green service has its remote call to purple interrupted because it goes over the 80ms timeout. The long wait might be because that purple instance is unhealthy and failing. Green can either retry the request and hope it connects to the second purple instance, or it can bubble up the error. 
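Here's a hedged sketch of the idea in Python: a wrapper that time-boxes the remote call, counts consecutive failures, and fails fast once the breaker opens. The thresholds and timings are illustrative:

```python
# Minimal circuit breaker sketch: time-box the remote call, count failures,
# and fail fast ("open" state) once too many calls in a row have failed.
import time
from concurrent.futures import ThreadPoolExecutor

class CircuitBreaker:
    def __init__(self, timeout_s=0.08, max_failures=3, reset_after_s=30):
        self.timeout_s = timeout_s
        self.max_failures = max_failures
        self.reset_after_s = reset_after_s
        self.failures = 0
        self.opened_at = None
        self._pool = ThreadPoolExecutor(max_workers=4)

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.time() - self.opened_at < self.reset_after_s:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call through
        future = self._pool.submit(fn, *args)
        try:
            result = future.result(timeout=self.timeout_s)
        except Exception:  # timed out or the remote call itself failed
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.time()
            raise
        self.failures = 0
        return result

# Usage: breaker.call(call_purple_service, request) starts failing immediately
# once purple has timed out several times in a row, instead of piling up waits.
breaker = CircuitBreaker()
```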

Simulate Failure

Regardless of the software solutions you put in place, there's nothing like actually testing out how your application handles failure. Most large applications are designed to handle different operating conditions and can adapt to the number of resources available. 

For example, an application serving the continental US might need fewer resources when most of the people in the area are asleep. In such a system, how would the application react to suddenly losing 25% of its resources during peak operating hours? Would it slow down? Would it crash? Is it able to degrade its services and provide the expected levels of service? 

These are questions that resilience testing tools like Chaos Monkey and Gremlin can answer. Tools like Chaos Kong (part of Netflix's Simian Army suite) will let you simulate the loss of an entire AWS region or availability zone. 
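Those tools work at the infrastructure level, but the underlying idea can also be sketched in application code: a wrapper that randomly injects latency or failures so you can watch how callers cope. The probabilities and delays below are illustrative:

```python
# Sketch of application-level fault injection: randomly fail or slow down a call
# so you can observe how the rest of the system degrades under partial failure.
import random
import time

def with_chaos(fn, failure_rate=0.1, max_extra_latency_s=0.5):
    def wrapped(*args, **kwargs):
        if random.random() < failure_rate:
            raise ConnectionError("injected failure")
        time.sleep(random.uniform(0, max_extra_latency_s))  # injected latency
        return fn(*args, **kwargs)
    return wrapped

def fetch_availability(item_id):
    return {"item": item_id, "available": 3}  # stand-in for a real remote call

flaky_fetch = with_chaos(fetch_availability)
for _ in range(5):
    try:
        print(flaky_fetch("mixer-01"))
    except ConnectionError as err:
        print("degraded path taken:", err)
```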

Must-Have Microservices Tools

You're only as good as the tools you use. This definitely holds true when working with microservices. You will need to pay special attention to monitoring tools, as your application is now distributed and running across multiple nodes. 

Monitoring and Observability

Monitoring tools give you insight into what's happening within your application. With a monolithic architecture, a majority of the service calls are within the application itself. This means that generating a stack trace and identifying a problem is much easier. 

However, with a microservices architecture, your service might just be one in a chain of remote service calls. You can lose context if you don't use the appropriate tools to trace your call stack. 

Open-source tools like Zipkin and Jaeger will allow you to follow the request path across multiple services. Additionally, paid platform services like New Relic and Sentry allow you to monitor across services and gain more insight into individual service performance metrics. 
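Tracing systems like Zipkin and Jaeger do far more than this, but the core mechanic they build on is propagating an identifier across service boundaries. Here's a minimal sketch of passing a correlation ID along so logs from different services can be stitched together; the "X-Correlation-ID" header name is a common convention, not a standard:

```python
# Sketch: attach a correlation ID to every outbound call and log it everywhere,
# so one user request can be followed across several services' logs.
import logging
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")

def handle_incoming(headers: dict) -> dict:
    # Reuse the caller's ID if present, otherwise start a new trace.
    correlation_id = headers.get("X-Correlation-ID", str(uuid.uuid4()))
    logging.info("service-a correlation_id=%s handling request", correlation_id)
    return call_downstream({"X-Correlation-ID": correlation_id})

def call_downstream(headers: dict) -> dict:
    # Stand-in for an HTTP call to the next service in the chain.
    logging.info("service-b correlation_id=%s handling request", headers["X-Correlation-ID"])
    return {"ok": True}

handle_incoming({})
```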

Service Mesh

If you're working with a microservices architecture of any decent size, I highly recommend that you use a service mesh. A service mesh is a layer that sits on top of your infrastructure and aids with the connectivity, security, and observability of your services. 

For example, Lyft's Envoy Proxy is a cloud-native edge and service proxy that can both act as an application proxy and run in sidecar mode. Let's take a look at an example.


In this illustration, you have three service containers running with a sidecar. On deployment, each sidecar only knows how to connect to its companion service and the service mesh controller. Once the sidecar connects to the controller, it exchanges information about how it can be contacted (e.g., IP and port). It also fetches information about other services available. This way, each sidecar is able to connect to every other service in the mesh. The service itself only needs to know how to connect to its own companion sidecar. All the complexity is hidden away in the service mesh, not in your application. 

Sidecars can monitor their companion service and report on health and other performance metrics. You can collate all this information and then display it on a central dashboard that shows overall performance and health metrics. 
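To make the sidecar/controller exchange a little more concrete, here's a toy sketch of the registration step: each sidecar reports its address to the controller and pulls back the addresses of everything else. Real meshes (Envoy, Istio, Linkerd) are far more involved; all names here are illustrative:

```python
# Toy sketch of a service-mesh control plane: sidecars register their address
# and receive the current list of peers, so services never track each other directly.
class MeshController:
    def __init__(self):
        self.registry = {}  # service name -> (host, port)

    def register(self, name, host, port):
        self.registry[name] = (host, port)
        return dict(self.registry)  # the sidecar learns about every other service

controller = MeshController()
print(controller.register("crew-planning", "10.0.0.5", 9000))
print(controller.register("equipment-planning", "10.0.0.6", 9000))
# The second call returns both entries: each sidecar now knows how to reach the other.
```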

Are Microservices the Way Forward?

Hopefully you now have a better sense of whether the microservices architecture is a good fit for you. Once you've made yourself familiar with the theory behind it, you can apply some of the techniques discussed here. 

A good place to start would be to separate a standalone functionality and deploy it as a microservice within your application. Play around with it a little bit and keep growing the number of services deployed. Soon enough, you'll have a few services running and things should come into focus for you. Good luck!

Download our free eBook

Mastering Software Delivery with Value Stream Management

Discover the key to optimizing your software delivery process with our comprehensive eBook on Value Stream Management (VSM). Learn how leading organizations are streamlining their software pipelines, enhancing quality, and accelerating delivery.

Deliver Better Software Faster with Plutora