Microservices architecture is the new black when it comes to designing software applications. Developers are abandoning monolithic application structures and adopting microservices, particularly for very large applications, because microservices are more flexible, highly scalable, easier to manage and able to provide the agility that DevOps teams require.
But as more applications are decomposed into microservices that must all communicate with each other, the resulting traffic is putting considerable strain on data center networks. Traditional hardware load balancers and Application Delivery Controllers (ADCs), which were not designed to cope with microservices, are struggling to keep up and are slowing down DevOps activities. The surge in data center traffic resulting from the rapid adoption of microservices demands a new approach to ADCs, one optimized for “East-West” load balancing.
Microservices Fuel Modern DevOps
The term “microservices” refers to a modern architectural approach to building and deploying software applications. In this architecture, applications are broken down into a collection of services that are independent of each other and can be loosely coupled together. This architecture makes it easier to test and maintain the services and enables services to be organized around business features. An easy way to think of microservices is as modules that can be added, changed and removed without having to make major changes to the whole application.
For example, a web host might have a cluster of web servers (nginx), a separate cluster of key-value stores (Memcached), and a traditional database cluster (MySQL). With a microservices architecture, these would operate independently as distinct services, each with its own redundancy, resiliency and spec. Most importantly, each one could be moved, deployed or altered independently and without affecting the other services. For example, if the key-value store has an application programming interface (API), the web host could change what powers it (i.e., change Memcached to Redis) without needing to change much else.
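The Memcached-to-Redis swap above hinges on the web tier coding against a stable interface rather than a concrete backend. The sketch below illustrates that idea in Python; the class and method names are hypothetical, and in-memory dictionaries stand in for real client libraries so the example stays self-contained.

```python
from abc import ABC, abstractmethod

class KeyValueStore(ABC):
    """Hypothetical API boundary the web tier codes against."""
    @abstractmethod
    def get(self, key: str): ...
    @abstractmethod
    def set(self, key: str, value: str) -> None: ...

class MemcachedStore(KeyValueStore):
    # Stand-in for a Memcached client; a real service would use an
    # actual client library behind the same interface.
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

class RedisStore(KeyValueStore):
    # Stand-in for a Redis client; same interface, different backend.
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

def render_page(cache: KeyValueStore) -> str:
    # The web tier only sees the interface, so swapping Memcached for
    # Redis is a one-line change at wiring time, not a rewrite.
    cached = cache.get("home")
    if cached is None:
        cached = "<html>home</html>"
        cache.set("home", cached)
    return cached
```

Because `render_page` depends only on the `KeyValueStore` interface, the service behind it can be replaced without touching any calling code.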
Microservices have accelerated the pace of innovation for DevOps teams. Developers simply deploy code faster with microservices. One small change doesn’t require rewriting an entire application, as it might in a monolithic architecture. Developers can also rapidly test the changes and see the immediate business impact. Microservices are key to providing the flexibility and agility that DevOps teams need for continuous deployment and to align more closely with business goals.
As microservices proliferate, it’s critical that the traffic loads they create in the data center are properly managed so that nothing impedes the speed of development and business operations.
Why East-West Traffic Load Balancing Matters
There are two basic traffic patterns inside a data center, cloud or virtual machine cluster. “North-South” traffic passes between the internal network and external networks – for example, a user browsing the Internet on an external network requests a page from a web server on the internal network. “East-West” traffic starts and ends within the internal network – for example, traffic between servers or between one VM and another. If a web server needs to query a database server, that would be East-West traffic.
North-South traffic requires North-South load balancing, where an ADC routes traffic from clients in external networks to a set of destination servers in the internal network. East-West traffic requires East-West load balancing, where an ADC manages traffic between microservice components within the internal network.
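The core of East-West load balancing is distributing internal requests across a pool of backends. As a minimal sketch, here is a round-robin balancer in Python; the class name and the endpoint addresses are illustrative, not drawn from any particular product.

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin balancer over a pool of internal endpoints."""
    def __init__(self, endpoints):
        # cycle() endlessly repeats the pool in order.
        self._cycle = itertools.cycle(endpoints)

    def next_endpoint(self):
        # Each request is handed the next server in the pool.
        return next(self._cycle)

# East-West example: a web tier spreading queries across an
# internal database pool (addresses are hypothetical).
db_pool = RoundRobinBalancer(["10.0.1.10:3306", "10.0.1.11:3306"])
```

Production ADCs layer health checks, weighting and connection tracking on top of this basic rotation, but the traffic pattern being balanced is the same.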
The graphic below illustrates the difference: the green lines represent North-South traffic and red lines represent East-West traffic.
While microservices give developers the agility they need, they can leave an operations team with the challenge of designing reliable communication between all the many services or modules.
Uber has a great diagram illustrating how they changed their architecture when they switched to microservices. The previous model is on the left and the new model is on the right. Note how the new architecture has increased flexibility and removed potential bottlenecks and single points of failure. However, with many more connection points between components, overall internal traffic increases and the network becomes more complex.
These changes need to be managed so that the network operates efficiently and potential failures are avoided. Ensuring the availability of microservices is imperative, especially as we move toward platforms like Kubernetes where individual servers in a service cluster may be terminated.
We need to consider the redundancy or high-availability requirements of the cluster. Do you need to do blue-green deployments? What notifications and APIs do you need? And can you tolerate a failure? For example, a key-value store that caches content from a database may be allowed to "fail" during a deployment, as the data will then be fetched from the database; however, it might not be acceptable to allow downtime from the database itself.
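The cache-versus-database distinction above can be made concrete with a read-through lookup that tolerates a cache outage. This is a sketch under stated assumptions: `DownCache` is a hypothetical stub simulating a key-value store that is offline mid-deployment, and a plain dictionary stands in for the database.

```python
class DownCache:
    """Hypothetical stub: a key-value store offline mid-deployment."""
    def get(self, key):
        raise ConnectionError("cache unavailable")
    def set(self, key, value):
        raise ConnectionError("cache unavailable")

def fetch_user(user_id, cache, database):
    # Read-through lookup that tolerates cache downtime: if the cache
    # is unreachable (e.g. during a blue-green deployment), fall back
    # to the authoritative database, which must stay available.
    try:
        cached = cache.get(user_id)
        if cached is not None:
            return cached
    except ConnectionError:
        pass  # acceptable failure: the cache tier may be mid-deployment
    value = database[user_id]
    try:
        cache.set(user_id, value)  # repopulate once the cache returns
    except ConnectionError:
        pass  # still tolerable; next read hits the database again
    return value
```

The asymmetry is deliberate: every cache error is swallowed, while a database error would propagate, because only the cache is allowed to fail.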
That’s where East-West traffic load balancing comes in. All these challenges can be addressed with the right East-West load balancing capabilities and, in more high-end environments, with a modern ADC. In addition to load balancing, modern ADCs provide valuable telemetry data – for example, they measure the response time from web servers, how many connections are active on a database cluster, and so on.
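The telemetry an ADC gathers per backend, such as response times and active connection counts, can be sketched as a small set of counters. The class and method names below are hypothetical; real products expose this data through their own APIs and dashboards.

```python
from collections import defaultdict

class BackendTelemetry:
    """Sketch of per-backend counters of the kind a modern ADC exposes."""
    def __init__(self):
        self._durations = defaultdict(list)  # response-time samples
        self._active = defaultdict(int)      # in-flight connections

    def start_request(self, backend):
        self._active[backend] += 1

    def end_request(self, backend, duration_s):
        self._active[backend] -= 1
        self._durations[backend].append(duration_s)

    def active_connections(self, backend):
        return self._active[backend]

    def avg_response_time(self, backend):
        samples = self._durations[backend]
        return sum(samples) / len(samples) if samples else 0.0
```

Aggregates like these are what let operators spot a slow web server or an overloaded database cluster before users do.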
East-West load balancing helps ensure that microservices can communicate with each other and that each component in a cluster has the proper reliability assurances.
Limitations of Traditional ADC Approach
The move to microservices and the demand for East-West traffic load balancing highlights the limitations of hardware ADC solutions and traditional ADC software offerings.
For starters, many ADCs hit their performance targets by using far more compute resources than are typically available in a lightweight microservices implementation. Some require 16 cores to run, which is not cheap to spin up in the cloud. Running 500 16-core machines is simply a non-starter. In sharp contrast, Snapt performance guidelines are based on a single core with 2GB of memory – we call this "commodity hardware" – and can deliver 100,000 Layer 7 requests per second.
Also, traditional ADCs and their vendors’ business models don’t scale well with the increasing traffic demands and disaggregated architecture of microservices deployments. DevOps teams running 50 load balancers, each managing 1/50th of the traffic, need a very different licensing structure compared with buying one large load balancer.
And finally, not all ADCs can integrate with services automatically and be controlled easily via APIs.
These limitations often lead to bad design and problems such as:
- Having one device serve as both a database (East-West) load balancer and a North-South load balancer, causing DMZ firewall issues and complicating blue-green deployments.
- The inability to keep pace with deployments because change control on unrelated assets delays every update.
- Extreme costs that hurt the business or limit the ability to deploy.
- Pre-production, quality control and development environments not matching production environments.
Modern ADCs Prioritize East-West Traffic Load Balancing
Rather than trying to shoehorn old-world solutions into the new world of microservices, we need a new approach to ADCs. Modern ADCs should address the unique challenges arising from microservices architectures, including East-West traffic load balancing with a lightweight footprint, broad compatibility and open APIs.
With a proper East-West load balancing strategy, and the right ADC, DevOps teams can ensure the availability of the services they manage, and benefit from detailed monitoring and analytics, self-managing and self-scaling components, and the ability to make changes easily in a controlled environment.
To keep DevOps teams happy and meet the demands of East-West load balancing, look for an ADC with the following attributes:
- It should be lightweight. Measure the performance per CPU core and check the memory requirements.
- It should be easy to control via API, scripts, deployment tools, etc.
- It should be able to provide service discovery for your systems.
- It should be able to run in any cloud or VM environment to allow for growth and future migrations as organizations tend towards multi-cloud deployments.
- It should have a licensing model that makes sense at scale.
- It should provide detailed telemetry information for your services: microservices don't fail when they go offline, they fail when they slow down!
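The last point, that microservices fail by slowing down rather than by going offline, suggests health checks that probe latency, not just reachability. Here is a minimal sketch; the class name and the default threshold are illustrative choices, not taken from any specific ADC.

```python
import time

class LatencyHealthCheck:
    """Health probe that fails a backend for being slow,
    not only for being offline."""
    def __init__(self, max_latency_s=0.25):
        # Illustrative threshold: tune per service in practice.
        self.max_latency_s = max_latency_s

    def probe(self, call):
        # `call` performs one request against the backend.
        start = time.monotonic()
        try:
            call()
        except Exception:
            return False  # hard failure: backend is down
        # Soft failure: the backend answered, but too slowly to be healthy.
        return (time.monotonic() - start) <= self.max_latency_s
```

A balancer could run this probe periodically and drain traffic from any backend whose probe fails, catching degradation well before an outright outage.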
Traffic patterns in the data center are surging, spurred by the rapid growth of microservices. Without robust East-West load balancing, the increasing traffic load will hinder DevOps teams and limit the benefits of adopting microservices. An ADC that focuses on East-West load balancing ensures that DevOps teams can leverage the flexibility, agility and scalability that microservices deliver.
To learn more about how Snapt’s software load balancer addresses East-West traffic, download the free trial or book a demo with us today.