Service mesh 101: What is a service mesh?
Microservice architectures are becoming increasingly popular in enterprise organizations, as smaller, more targeted services allow for greater agility than a monolithic architecture that’s difficult to develop and maintain. However, this type of architecture brings challenges of its own.
As organizations build more microservices, complexity grows. The governance and security considerations behind microservice interactions are often custom-coded into the service logic. Teams build in different languages and deploy to multiple environments, and an organization’s services are typically siloed with decentralized management.
The service mesh was introduced to address the challenges that come with microservice implementations. A service mesh abstracts away the governance considerations behind microservices that primarily interact with one another.
What is a service mesh?
A service mesh is an architectural pattern for microservices deployments that uses sidecar proxies to enable secure, fast, and reliable service-to-service communication. Most service mesh offerings, such as Istio, are deployed into a Kubernetes cluster. While there are many open-source service mesh projects and other commercial offerings available, Istio has emerged as the de facto market standard.
How does a service mesh work?
Historically, organizations wanting to implement shared functionality had to choose between binding shared libraries into their microservices or inserting a centralized proxy into the architecture. The libraries would become a change management problem, while the proxy could lead to increased latency. A sidecar pattern enables an organization to have a local proxy that’s not bound into a service, providing cleaner separation and better maintainability over time.
Using this pattern, microservices within a given deployment or cluster interact with each other through sidecar proxies, or sidecars. These are lightweight reverse-proxy processes deployed alongside each service process in a separate container.
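As a rough sketch, a Pod running a service plus its sidecar might look like the following (the service name and images are illustrative; in practice, meshes like Istio inject the proxy container automatically rather than requiring it in the manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
  - name: orders              # the application container
    image: example/orders:1.0
    ports:
    - containerPort: 8080
  - name: istio-proxy         # the sidecar proxy, normally injected by the mesh
    image: istio/proxyv2:1.20.0
```

All traffic into and out of the `orders` container is transparently routed through the `istio-proxy` container, which is where mesh policies are enforced.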
Sidecars intercept the inbound and outbound traffic of each service and act according to the security and communication rules specified by a control plane. Developers can configure and add policies at the control-plane level, abstracting the governance considerations behind microservices out of the service code, regardless of the technology used to build it. Common policies include circuit breaking, timeouts, load balancing, service discovery, and security (transport layer security and mutual authentication).
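In Istio, for example, such policies are declared as Kubernetes resources rather than coded into services. The sketch below (service name `orders` is illustrative) sets a timeout and retry policy with a `VirtualService` and a simple circuit breaker with a `DestinationRule`; exact field names can vary between Istio versions:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
  - orders
  http:
  - route:
    - destination:
        host: orders
    timeout: 2s            # fail calls that take longer than 2 seconds
    retries:
      attempts: 3          # retry failed calls up to 3 times
      perTryTimeout: 1s
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders
  trafficPolicy:
    outlierDetection:           # circuit breaking: eject unhealthy endpoints
      consecutive5xxErrors: 5
      interval: 30s
      baseEjectionTime: 60s
```

The service code itself contains none of this logic; the sidecars enforce it for every call to `orders`.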
Service mesh as a concept has evolved over time. Initially, microservices adopters embraced the “smart endpoints and dumb pipes” principle for all functionality. They soon realized that it made more sense to provide system-wide policies for common capabilities like routing, rate limiting, and security. Netflix was among the first to separate out the application networking layer, creating its famous OSS stack including Hystrix, Eureka, and Ribbon, among others. This was followed by Envoy, a high-performance, distributed proxy originally developed at Lyft. Such technologies provided the foundations for the service mesh.
Challenges from microservices
Before microservices, enterprises used a monolithic approach to building applications — an approach that led to slower, less reliable applications and longer development schedules. This led many organizations to evolve to a microservice-based architecture so they could scale their applications alongside business needs.
Using a microservices approach, large complex applications can be divided up into smaller building blocks of executables that interact to offer the functionality of a highly complex application. Applications and services are broken down into smaller, independent services with strong network boundaries.
While it’s clear that the microservices design pattern has many advantages over a monolithic approach to developing software, it does come with its challenges:
- Secure inter-service communications: For a microservices-based solution to work, all services need to communicate with each other over network calls. Each of these network calls requires the appropriate level of access, authentication, and authorization (AAA). To further complicate the situation, the AAA needs may differ from one network call to another.
- Traffic control and fault tolerance: A form of traffic management is needed to prioritize inter-service network calls. For a variety of reasons, some paths between services might not be available, so your network must handle these failure situations and provide fault tolerance.
- Management and monitoring: In a microservices architecture, services are owned and managed by multiple teams. These silos often result in inconsistent policy enforcement and governance. Furthermore, each of these teams might use a disparate set of DevOps tools for management and monitoring.
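To illustrate how a mesh addresses the first of these challenges, Istio can enforce mutual TLS and per-service authorization declaratively, with no AAA code in the services themselves. In this sketch (the namespace, service, and service-account names are illustrative), all workloads must use mutual TLS, and only the `frontend` service account may call `orders`:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod
spec:
  mtls:
    mode: STRICT            # require mutual TLS for all service-to-service calls
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: orders-allow-frontend
  namespace: prod
spec:
  selector:
    matchLabels:
      app: orders
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/prod/sa/frontend"]
```

Because the sidecars terminate mutual TLS, each caller’s identity is cryptographically verified before the authorization rule is evaluated.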
To solve these challenges, many organizations are forced to hand-code the governance considerations behind microservices into the service code itself. This complexity can stifle innovation and agility, negating the promise of microservices.
Benefits of a service mesh
Service mesh is already viewed as a crucial component in solving the operational challenges of managing and governing microservices. Analysts such as Gartner and IDC predict that companies deploying microservices to production will require some form of service mesh capability to scale.
A service mesh abstracts these governance considerations away from the services, regardless of the technology used to build them. It is an independent architectural layer that allows for:
- Central governance and reliability over inter-service communication, with policies to handle traffic control.
- Consistent security with policies for authentication and authorization.
- Discovery of services into any existing app development and monitoring tools of choice.
Service mesh and API management
A service mesh does not solve all challenges in the microservices lifecycle on its own. Organizations still need a way to easily publish and reuse microservices across teams. Furthermore, a service mesh only provides these benefits to the set of microservices within a given deployment. Organizations need a way to centrally view and govern all their services, regardless of language or deployment model.
Ideally, an organization can discover, manage, and secure any service in a single, unified platform. With Anypoint Service Mesh, customers can discover, manage, and secure all services inside any Kubernetes cluster directly within Anypoint Platform.
By extending Anypoint Platform to any microservice, Anypoint Service Mesh allows customers to expand their application network to any service, Mule and non-Mule. Through Anypoint Platform’s single control plane, customers can now:
- Discover and leverage any service in any architecture
  - Understand microservice dependencies using the application network graph.
  - Maximize adoption and reuse by adding microservices to Anypoint Exchange.
- Centrally manage and scale
  - Ensure resiliency across services with Istio traffic control policies.
  - Measure and optimize performance across all microservices with API analytics.
- Enable security by default
  - Ensure zero-trust security with Istio and Envoy authentication and authorization policies.
  - Add additional layers of security for consumer-facing services.
To learn more about the role of API management in your service mesh implementation, download our whitepaper.