What are microservices? A flexible, modular way to deliver apps

Modern software development practices require rapid, scalable delivery in response to unpredictable and volatile IT conditions and requirements. Microservices architecture is one viable solution to this complex problem.

Building applications by splitting up resources into microservices — rather than maintaining a monolithic codebase and resource pool — allows developers to keep pace with innovation and increasingly disruptive business environments. A 2020 survey found that 61% of organizations had been using microservices for a year or longer. Let’s take a deeper look at microservices and microservices architecture.

What are microservices?

Microservices are flexible, lightweight, modular software services of limited scope that fit together with other services to deliver full applications. This method of structuring, developing, and operating complex, multi-function software as a collection of smaller independent services is known as microservice architecture.

Using a microservices approach, DevOps teams split services into functional APIs instead of shipping applications as one large, collective unit. API interfaces connect services with core functionality, allowing applications to communicate and share data. A collection of independent services working together to perform a business function makes up an application.
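
To make this concrete, here is a minimal sketch of a single-purpose service, assuming Python and the Flask framework; the inventory endpoint and in-memory data are hypothetical placeholders, not part of any specific product.

```python
# Minimal sketch of a hypothetical single-purpose "inventory" microservice.
# It owns one narrow business capability and exposes it through a small REST API
# that other services call over HTTP.
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the datastore this service would own.
_inventory = {"sku-123": {"name": "widget", "in_stock": 42}}

@app.route("/inventory/<item_id>", methods=["GET"])
def get_item(item_id):
    item = _inventory.get(item_id)
    if item is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(item)

if __name__ == "__main__":
    app.run(port=5001)
```

An ordering service would call this endpoint over HTTP rather than importing the inventory code directly, which is what keeps the two services independently deployable.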

One primary advantage of microservices: The DevOps teams responsible for developing and maintaining them are smaller, making the scope of each project more manageable.
Here are a few common features of microservices:

  • Highly maintainable and testable: Supports agile development and rapid deployment of services.
  • Loosely coupled: With minimal dependencies, changes in the design, implementation, or behavior in one service won’t affect other services.
  • Autonomous services: Each service internally controls its own logic.
  • Independently deployable: Code can be written in different languages and can be updated in one service without affecting the full application.
  • Focused on delivering business value: Microservices are deployed based on business demands and can be used as a building block for additional deployment.

Microservices typically run on container platforms and orchestrators such as Docker and Kubernetes, or on cloud-native function-as-a-service (FaaS) offerings such as AWS Lambda, Azure Functions, and Google Cloud Functions, which help manage and automate them.
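
On the FaaS side, a microservice can be as small as a single handler function. The sketch below assumes an AWS Lambda-style Python handler sitting behind an API gateway; the event shape and order math are made up for illustration.

```python
# Hypothetical AWS Lambda-style handler: one small function per business capability.
# The platform provisions and scales the underlying compute automatically.
import json

def handler(event, context):
    # Assume the event carries an order payload forwarded by an API gateway.
    order = json.loads(event.get("body", "{}"))
    total = sum(line["price"] * line["quantity"] for line in order.get("lines", []))
    return {
        "statusCode": 200,
        "body": json.dumps({"order_id": order.get("id"), "total": total}),
    }
```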

What is microservice architecture?

Microservice architecture is a cloud-native architectural approach used to build applications using independently deployable microservices. It’s like a newer version of service-oriented architecture (SOA), a term coined in 1998.
Here are a few unique characteristics of a microservices architecture:

  • Microservices architecture relies on cross-functional teams with development, database, and user-experience skills that implement software across the full stack.
  • Microservices architecture componentizes software through services, which are independently replaceable components, rather than through libraries.
  • Microservices architecture uses smart endpoints and dumb pipes as the communication pattern between services separated by hard boundaries: the pipes only route messages, while the endpoints that produce and consume those messages hold the logic (see the sketch after this list). Many teams also add a service mesh to make their pipes more capable.
  • Microservices architecture keeps services independent, with each function of an application operating as a single service that can be independently deployed and updated.
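
To illustrate the smart endpoints, dumb pipes idea mentioned above, here is a minimal in-process sketch; in a real deployment the pipe would be an external broker (RabbitMQ, Kafka, or similar), but the principle is the same, and the event fields are hypothetical.

```python
# Sketch of "smart endpoints, dumb pipes": the pipe only moves messages,
# while the producing and consuming services hold all the business logic.
import json
import queue

pipe = queue.Queue()  # dumb pipe: routes messages, applies no business rules

def order_service_publish(order_id, amount):
    # Smart endpoint: the producer decides what to emit and how to shape it.
    pipe.put(json.dumps({"event": "order_placed", "order_id": order_id, "amount": amount}))

def billing_service_consume():
    # Smart endpoint: the consumer interprets the message and applies its own logic.
    message = json.loads(pipe.get())
    if message["event"] == "order_placed":
        print(f"charging {message['amount']} for order {message['order_id']}")

order_service_publish("o-42", 19.99)
billing_service_consume()
```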

To fully understand microservices and microservices architecture, it’s helpful to have a basic understanding of the monolithic approach that preceded them.

An example of microservices architecture and design.

What is the monolithic approach?

Monolithic architectures structure software in a single tier. Like a stone pillar, the resulting product is large, uniform, and rigid. Monolithic software systems employ one large codebase (or repository), which includes collections of tools, SDKs, and associated development dependencies. There is nothing inherently wrong with building a service as a single application — it can be appropriate for many common use cases, such as automating a shipping label or charging a credit card. Giants like Google and Microsoft once employed monolithic architectures almost exclusively.
With monolithic architectures, components all coexist in a single deployment. One large team generally maintains the source code in a centralized repository visible to all engineers, who commit their code in a single build. These teams typically use standardized tools and follow a sequential process to build, review, test, deliver, and deploy code.

Common problems encountered with the monolithic approach

Problems arise when demand for the application grows or new requirements are added. Since monolithic software systems keep everything in one large codebase repository, including the collections of tools, SDKs, and associated development dependencies, the service becomes a behemoth that is labor-intensive to manage.
Even if one team built it, a year later, three separate DevOps and IT teams may be responsible for maintaining it.
It doesn’t take long for most organizations to find the monolithic approach too slow to meet demand and too restrictive for developers. This is true even in heavily regulated sectors such as banking and government. A massive codebase can also suffer from instability issues and bugs that directly impact other shared systems.

Microservices vs. monolithic approach

As demand for services continually increases and containerized applications are deployed across multi-cloud environments, organizations need a more flexible, cohesive approach to designing and developing a distributed systems architecture.
One advantage of microservices is that they give developers more flexibility by allowing them to use whichever programming language or framework they prefer. This freedom of choice helps to prevent employee churn and the need for outsourced talent. Designated teams within an organization own various APIs and applications throughout their life cycles and become leading in-house experts on the technologies they develop.
On the deployment front, smaller teams can launch services much faster using flexible containerized environments, such as Kubernetes, or serverless functions, such as AWS Lambda, Google Cloud Functions, and Azure Functions. Accordingly, applications don’t have to wrestle over shared resources during runtime.
Here’s a brief comparison of a microservices approach versus a monolithic approach:

  • Monolithic software only communicates to systems within its own boundaries, whereas microservices-based software can direct calls to multiple services and repositories simultaneously. This distributed approach of microservices architecture gives applications more flexibility and the ability to spread processing requests across many resources, which can reduce bottlenecks.
  • Monolithic development can be painfully slow and cumbersome, especially when the application codebase becomes unmanageable.
  • Microservices development is intrinsically optimized for continuous integration and continuous deployment (CI/CD) processes that apply agile development best practices for rapid delivery of reliable code.

Benefits of a microservices architecture

Microservices architecture helps DevOps teams bring highly scalable applications to market faster, and it’s more resilient than a monolithic approach. In a monolith, one service failure simultaneously impacts adjacent services and causes performance delays or an outage. With microservices architecture, each service has clear boundaries and resources. If one fails, the remaining services remain up and running.
Here are a few other benefits of a microservices architecture:

  • Flexible architecture
    • Microservices architecture enables developers to use many different images, containers, and management engines. This flexibility gives developers tremendous latitude and configuration options when creating and deploying applications.
  • Uses fewer resources
    • Microservices tend to use fewer resources at runtime because a cluster manager allocates them efficiently.
    • The cluster manager automatically allocates memory and CPU capacity among the services in each cluster based on performance and availability needs.
    • Because each cluster hosts many services, there are fewer clusters to manage overall.
  • More reliable uptime
    • There is less risk of downtime because developers do not have to redeploy the entire application when making an update to a service.
    • A small service codebase makes it easier to detect problems (lowering mean time to detect, or MTTD) and to recover from them (lowering mean time to recovery, or MTTR).
  • Simple network calls
    • Network calls made using microservices are simple and lightweight because the functionality of each service is limited.
    • REST (or RESTful) APIs provide the protocols, routines, rules, and commands used to build microservices. REST APIs are popular with developers because they use easy-to-learn HTTP methods, as shown in the sketch after this list.
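
As a rough illustration of how lightweight these calls are, the snippet below uses Python’s requests library to call two hypothetical service endpoints; the URLs and payloads are placeholders.

```python
# Hypothetical client-side calls between microservices using plain HTTP methods.
import requests

# GET: read state that another service owns.
stock = requests.get("http://inventory-service:5001/inventory/sku-123", timeout=2)
stock.raise_for_status()

# POST: ask another service to perform an action, passing a small JSON payload.
order = requests.post(
    "http://order-service:5002/orders",
    json={"sku": "sku-123", "quantity": 2},
    timeout=2,
)
order.raise_for_status()
print(stock.json(), order.json())
```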

Challenges of a microservices architecture

Reaping the benefits of a microservices architecture requires overcoming a few inherent challenges. For example, a long chain of service calls over the network can potentially decrease performance and reliability. When more services can make calls simultaneously, the potential for failure compounds for each service — especially when handling large call volumes.
Automated detection and testing can help to streamline and optimize these API backends and prevent potential IT downtime.
Here are some additional challenges that can arise with a microservices architecture:

  • Steep learning curve: The learning curve and logistics for initial setup can be a challenge, as configuring images and containers can be tricky without previous experience or expertise.
  • Complexity: Fragmentation, while a unique benefit of microservices, means there are more pieces to manage and own. In response, teams must adopt a common language and implement automated solutions to help them manage and coordinate the complexity.
  • Limited observability: With many dynamic services running across disparate environments, maintaining adequate observability is difficult, and monitoring can become a pain point. Manually pulling metrics from a managed system such as Kubernetes can be laborious.
  • Cultural shift: The culture shift required of microservices architecture is an advantage in that it requires team members to think more efficiently and modularly, but it also requires time and commitment for teams to make the transition smoothly.

What tools do you need to run a microservices architecture?

There is no set way to design a microservices architecture or mandatory set of tools needed to run it. However, there are a few core components that are commonly found in most systems using this architectural style. These include:

  • Containers: Container development and orchestration environments such as Docker and Kubernetes are the preferred method for running microservices because they are more lightweight and portable than virtual machines and make it easier to manage containers at scale.
  • API gateways: These servers handle all requests from the client and then route them to the appropriate microservices, helping to keep individual services more lightweight.
  • Serverless platforms: Serverless cloud-native platform offerings by Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) provide the server infrastructure and are one option for building and deploying microservices.
  • Service mesh: A service mesh is a dynamic platform messaging layer built on top of the infrastructure layer that encrypts data, performs load balancing, and controls service-to-service communication requests.
  • Systems monitoring and alerting: Monitoring and alerting tools and protocols help to simplify observability for all your custom metrics (see the sketch after this list). Three popular open-source options are:
    • Prometheus (stores metrics in a time series database)
    • StatsD (vendor-independent industry standard for real-time monitoring)
    • Telegraf (plugin-driven server agent used to collect, process, aggregate, and write metrics)
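
For example, a service can expose custom metrics for a tool like Prometheus to scrape. The sketch below assumes the official prometheus_client Python package; the metric names and simulated work are made up for illustration.

```python
# Hypothetical sketch: expose custom metrics that Prometheus can scrape from /metrics.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_processed_total", "Orders handled by this service")
LATENCY = Histogram("order_processing_seconds", "Time spent processing an order")

def process_order():
    with LATENCY.time():   # record how long the simulated work takes
        time.sleep(random.random() / 10)
    REQUESTS.inc()         # count every processed order

if __name__ == "__main__":
    start_http_server(8000)  # serves metrics at http://localhost:8000/metrics
    while True:
        process_order()
```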

5 best practices for microservices

When developing a microservices architecture, these five best practices are helpful to include during all stages of the process.

  1. Treat ownership of every component as equally important, with each team member playing a critical role in all phases of the application development life cycle.
  2. Clearly define CI/CD processes to help development processes run efficiently, with every team member capable of deploying an update to production.
  3. Use asynchronous communication to achieve loose coupling and reduce dependencies between services so that a change in one will not affect application performance and end users.
  4. Test early and often using multiple methods, including spinning up or stubbing an instance of one microservice to test a dependent service in isolation (see the sketch after this list).
  5. Bake application security into all phases of development, starting with design and continuing through DevSecOps. This is critical because numerous calls are made over the network and more intermediary systems are involved in each instance.
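
As a small example of practice 4, a test can stub out a dependency rather than calling the real service. The sketch below is a hypothetical pytest-style test that fakes the inventory call with Python’s standard unittest.mock; the service names and URLs are placeholders.

```python
# Hypothetical sketch: test one service's logic while stubbing out its dependency.
from unittest import mock

import requests

def reserve_stock(sku, quantity):
    # In production this would call the real inventory service over HTTP.
    response = requests.get(f"http://inventory-service:5001/inventory/{sku}", timeout=2)
    response.raise_for_status()
    return response.json()["in_stock"] >= quantity

def test_reserve_stock_uses_stubbed_inventory():
    fake_response = mock.Mock(status_code=200)
    fake_response.json.return_value = {"in_stock": 10}
    with mock.patch("requests.get", return_value=fake_response):
        assert reserve_stock("sku-123", 2) is True
```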

How do you monitor microservices?

Effectively monitoring microservices involves having automatic and intelligent observability into all your cloud-native and container environments and every resource microservices interact with. This observability provides insight into the overall health of an application by evaluating the performance of each service providing a specific function. It can also give a comprehensive understanding of the availability and performance of each API transaction used to connect services, so anomalies can be detected in near-real time.

After deciding which monitoring approach is best suited to meet your business objectives and what metrics to measure, keep in mind the following goals:

  • End-to-end observability: Implement observability across the full stack so that cross-functional teams can identify and remediate performance issues in near-real time.
  • Real user monitoring (RUM): Record all user interactions to evaluate the performance of applications.
  • Auto-discovery: Automatically detect all applications, services, processes, and infrastructure at start-up.
  • Automatic instrumentation: Harness automation to address unknown unknowns and eliminate the need for manual coding, to avoid errors that can occur when manually monitoring and measuring the performance of an application.
  • Distributed tracing: Track requests as they flow through a distributed system to monitor, debug, and optimize services and the microservices architecture (see the sketch after this list).
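
As a small illustration of distributed tracing, the sketch below uses the OpenTelemetry Python SDK to create nested spans around two hypothetical steps of a checkout request; a real setup would export spans to a tracing backend rather than the console.

```python
# Hypothetical sketch: wrap the steps of a request in spans so a trace shows
# how the request flowed through the service.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("checkout-service")

def handle_checkout(order_id):
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        with tracer.start_as_current_span("reserve_inventory"):
            pass  # the call to the inventory service would go here
        with tracer.start_as_current_span("charge_payment"):
            pass  # the call to the payment service would go here

handle_checkout("o-42")
```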

Microservices managed

To manage the complexity of microservices and microservices architecture, DevOps and IT teams need a solution that puts automation and observability at the forefront of microservices monitoring and management.
Dynatrace delivers broad end-to-end observability into microservices and the systems and platforms that host and orchestrate them — including Kubernetes, cloud-native platforms, and open-source technologies.
Fueled by continuous automation, the Dynatrace AI engine Davis helps DevOps teams implement the automatic detection and testing required to mitigate or eliminate reliability issues with complex call chains. With AI and continuous automation, teams can easily discover where streamlining is needed.

Learn more and watch the magic in action at the on-demand webinar exploring PurePath, Dynatrace’s patented technology for end-to-end observability into microservices, serverless applications, containers, service mesh, and the latest open-source standards such as OpenTelemetry.
