6 Key Load Balancer Performance Metrics You Should Track

by Gabi Goldberg on Tips and Tricks • March 5, 2019

A load balancer is essential for performance, scalability, and security, but an ADC also monitors your endpoints, providing valuable performance metrics.

Chief Information Officers, Chief Technology Officers, and IT managers of all breeds are finding themselves at the sharp end of business change, working alongside operational leaders to make key decisions and deliver customer value. But with this new responsibility comes the need to evolve the metrics they use to inform those decisions and to ensure alignment with the core strategic priorities of the business.

One of the key ways that IT departments have supported business growth is by facilitating and managing access to the cloud for data storage and computing services shared in a virtualized environment. In fact, total IT spending on infrastructure for cloud deployments is projected to grow at a 10.8 percent rate over the next five years. That means the value of cloud services will continue to be under the microscope, which in turn brings into focus the role that load balancing plays in handling user requests.

Gone are the days when monitoring workload and utilization alone was enough to ensure the smooth operation of the business. Regardless of which cloud service you use, these are the key load balancing metrics you need to track on your dashboard to ensure ideal utilization and maintain consistent system performance.

1. Request Counts

Whether measured as a total sum of all requests coming in across all load balancers or on a per-minute basis, monitoring request counts can help your organization understand more than you might think. Total requests can be correlated with the number or type of users your services support within a normal range, but they can also signal issues with routing or network connections further along the network if the totals fall outside certain bounds. Requests per minute by load balancer, on the other hand, provide a view into how well the load is being balanced across the system.
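As an illustrative sketch (the log fields and load balancer names here are hypothetical, not taken from any particular product), request totals and per-minute counts per load balancer might be aggregated from access-log records like this:

from collections import defaultdict
from datetime import datetime

# Hypothetical access-log records: (timestamp, load balancer ID)
records = [
    ("2019-03-05T10:00:12Z", "lb-1"),
    ("2019-03-05T10:00:47Z", "lb-2"),
    ("2019-03-05T10:01:03Z", "lb-1"),
]

requests_per_minute = defaultdict(int)  # keyed by (load balancer, minute)
for timestamp, lb_id in records:
    minute = datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%SZ").strftime("%Y-%m-%d %H:%M")
    requests_per_minute[(lb_id, minute)] += 1

print("Total requests:", sum(requests_per_minute.values()))
for (lb_id, minute), count in sorted(requests_per_minute.items()):
    print(f"{lb_id} {minute}: {count} requests")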

2. Active Connection or Flow Counts

Linked to the request count metric is the active connection or active flow count metric. Displayed as an average or maximum, monitoring active connections between clients and target servers can help determine whether the right level of scaling is in place and, at the load balancer level, whether work is being appropriately spread across your network. Further analysis can look at how smoothly inbound and outbound flows are running, broken down into IPv4 and IPv6 traffic.
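As a rough sketch (the sample counts and load balancer names are hypothetical), average and peak active connections per load balancer could be summarized and compared like this:

# Hypothetical per-minute samples of active connection counts, per load balancer
samples = {
    "lb-1": [120, 134, 128, 160, 155],
    "lb-2": [45, 52, 49, 61, 58],
}

averages = {}
for lb_id, counts in samples.items():
    averages[lb_id] = sum(counts) / len(counts)
    print(f"{lb_id}: average active connections {averages[lb_id]:.0f}, peak {max(counts)}")

# A wide spread between averages suggests work is not being spread evenly
spread = max(averages.values()) - min(averages.values())
print(f"Spread between load balancers: {spread:.0f} connections")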

3. Error Rates

Like latency and request counts, tracking error rates either over time or on a per-load-balancer level can provide a view into how well your services are running. Front-end errors returned to the client can point to configuration problems, while back-end errors can point toward communication issues between the servers and the load balancers.
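A minimal sketch of the distinction, assuming hypothetical response records that pair the status code returned to the client with the status code received from the target server (None when the request never reached a target):

# Hypothetical response records: (front-end status, back-end status);
# back-end status is None when the request never reached a target server.
responses = [
    (200, 200), (502, None), (404, 404), (200, 200), (500, 500), (200, 200),
]

total = len(responses)
frontend_errors = sum(1 for client_status, _ in responses if client_status >= 400)
backend_errors = sum(1 for _, target_status in responses
                     if target_status is not None and target_status >= 500)

print(f"Front-end error rate: {frontend_errors / total:.1%}")
print(f"Back-end error rate:  {backend_errors / total:.1%}")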


4. Latency

Latency is the amount of time between the load balancer receiving a request and returning a response. It is one of the top metrics to keep your eye on because its value can be directly linked to users’ experience. If latency is too high, applications or websites run slowly, which frustrates users as they lose productivity or move to other services. Latency can be monitored on a per-load-balancer basis to identify potential problem performers, or over time as an average to judge user experience.
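Because averages can hide occasional slow responses, it is common to track a high percentile alongside the mean. A sketch, using hypothetical latency samples and a simple nearest-rank percentile:

# Hypothetical latency samples in milliseconds, per load balancer
latencies_ms = {
    "lb-1": [12, 15, 14, 90, 13, 16, 300, 14],
    "lb-2": [11, 12, 13, 12, 14, 13, 12, 15],
}

def percentile(values, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(values)
    index = max(0, int(round(pct / 100 * len(ordered))) - 1)
    return ordered[index]

for lb_id, samples in latencies_ms.items():
    average = sum(samples) / len(samples)
    print(f"{lb_id}: average {average:.1f} ms, 95th percentile {percentile(samples, 95)} ms")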

5. Number of Healthy/Unhealthy Hosts

The number of healthy, active hosts available to each load balancer can be used to monitor the risk of service outages. Thresholds on the number of unhealthy hosts revealed by system health checks can be used to set alerts for proactive maintenance or troubleshooting before users notice service unavailability or latency.
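As an illustrative sketch (host names, load balancer names, and the threshold value are all hypothetical), such an alert can be as simple as counting failed health checks per load balancer:

# Hypothetical health-check results per load balancer: host -> is_healthy
health = {
    "lb-1": {"web-1": True, "web-2": True, "web-3": False},
    "lb-2": {"web-4": True, "web-5": False, "web-6": False},
}

UNHEALTHY_THRESHOLD = 1  # alert when more than this many hosts are unhealthy

for lb_id, hosts in health.items():
    unhealthy = [host for host, ok in hosts.items() if not ok]
    healthy_count = len(hosts) - len(unhealthy)
    print(f"{lb_id}: {healthy_count} healthy, {len(unhealthy)} unhealthy")
    if len(unhealthy) > UNHEALTHY_THRESHOLD:
        print(f"  ALERT: investigate {', '.join(unhealthy)} before users notice")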

6. Rejected or Failed Connection Count

Whether your system reaches capacity because it is overwhelmed by the sheer number of requests or because there are not enough healthy load balancers available, monitoring the number of failed or rejected connections can be revealing. In addition to indicating how many users may be turned away, failed connections can help show whether the right level of scale is in place or whether abnormal activity on your network should be further investigated.
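One simple way to surface both situations is to compare rejected-connection counts against a recent baseline; the sample numbers and the spike factor below are hypothetical, purely for illustration:

# Hypothetical per-minute counts of rejected or failed connections
rejected_per_minute = [0, 1, 0, 2, 0, 1, 45, 60, 52]

baseline = sum(rejected_per_minute[:6]) / 6  # typical level from earlier samples
SPIKE_FACTOR = 10  # treat a large jump over baseline as abnormal

for minute, count in enumerate(rejected_per_minute):
    if count > max(1, baseline) * SPIKE_FACTOR:
        print(f"Minute {minute}: {count} rejected connections "
              f"(baseline ~{baseline:.1f}) - check capacity or abnormal activity")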

Going to the Next Level with Your Load Balancer Monitoring and Metrics

The promise of cloud services, and of the network load balancing that enables them, has certainly become a reality for organizations. Yet, through the proper use of metrics, IT leaders can further improve the reliability and performance of their services. Pair these metrics with Snapt’s software-only application delivery controller (ADC) and its built-in web accelerator and firewall, and your organization can ensure your business-critical services stay online, run securely, and perform quickly.


Nova includes a high-performance, high-availability load balancer, GSLB, and WAF. The Nova Cloud provides centralized control, observability, analytics, and automation.

Aria includes a high-performance network load balancer, web accelerator, WAF, GSLB, high availability, and one year of technical support.


Book Product Demonstration