Why You Should Switch to a Cloud-Native Software ADC

by Bethany Hill on ADC • March 17, 2021

Hardware is comforting. When you own a hardware application delivery controller (ADC), you know that you control it: the box, the NIC, the power supply, the whole deal. This control has been the primary justification for spending seven figures to plunk a giant ADC in front of your entire application infrastructure. That said, the world has changed, and then it changed again.

At one time, the debate was whether to run a hardware ADC or a software ADC on a big VM. (Spoiler: the two are not that different.) Now, the question is: are you better off running a distributed cluster of cloud-native ADCs living in containers deployed in any cloud, rather than a large physical or monolithic ADC?

Here are some considerations on that decision for application architects and site reliability engineers (SREs).

Scalability: Buy a New Box, License a Big VM, or Spin up a Cloud ADC Cluster

Hardware Appliance

If you need more scale and you are using a hardware ADC, you are looking at a hardware install that could take months of planning and significantly disrupt operations. On-the-fly scalability and capacity additions are out of the question.

Software Appliance

If you are dealing with a larger-footprint, VM-based software ADC built on a monolithic architecture, you should be able to get a new license cleared within a few days, install the VM, and have it running within a week. However, you still cannot easily scale on the fly, and you might pay a hefty penalty for exceeding your licensed requests per second or bandwidth consumption.

Cloud-Native Software

You can scale by merely adding container-based ADCs as needed, usually within a matter of minutes. This ADC design features a true control plane/data plane separation. It also lets you offload numerous key tasks to a control plane service running in the cloud, consumed much like SaaS but at no extra cost. This architecture allows for better scaling to handle transient but impactful needs, such as responding to a DDoS attack, a product launch day, or Black Friday shopping.

Cloud-native software also supports both vertical and horizontal scaling: you can scale up the capacity of your individual ADCs or scale out by adding more ADCs for better geographic distribution or more resilience and redundancy.
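
To make that concrete, here is a minimal sketch of what scaling looks like when the ADC is just another container workload. It assumes the ADC runs as a standard Kubernetes Deployment and uses the official Kubernetes Python client; the Deployment name (adc-proxy), namespace, container name, and replica counts are hypothetical.

```python
# Minimal sketch: scaling a containerized ADC like any other Kubernetes
# workload. Names, namespace, and sizes are hypothetical; the pattern is
# the point.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

# Horizontal scale-out ahead of a traffic spike: go from 3 to 8 ADC replicas.
apps.patch_namespaced_deployment_scale(
    name="adc-proxy",
    namespace="edge",
    body={"spec": {"replicas": 8}},
)

# Vertical scale-up: give each ADC container more CPU and memory.
apps.patch_namespaced_deployment(
    name="adc-proxy",
    namespace="edge",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [{
                        "name": "adc",
                        "resources": {"limits": {"cpu": "4", "memory": "8Gi"}},
                    }]
                }
            }
        }
    },
)
```

In practice you would more likely attach a HorizontalPodAutoscaler and let the cluster do this automatically, but either way a capacity change is an API call, not a procurement cycle.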

Cost: Pay for “Worst Case” vs. “As You Go” (CapEx vs. OpEx)

Hardware Appliance

You pay a fixed cost for the physical appliance, often in the six or seven figures. That is a sunk cost. Because scaling hardware ADCs up or out on demand is impossible, the only way to handle capacity spikes is to over-purchase and hope that demand never exceeds the capacity you bought. This can mean big additional costs if your game goes viral or your entire company shifts to remote work when a pandemic strikes. In addition, hardware constitutes a capital expense (CapEx) and implies full depreciation, something your finance team may wish to avoid.

Software Appliance

Costs are somewhat more flexible, but licenses are hard to shift and are usually structured on annual terms. In some instances, you might even pay more for a software-based ADC than for a hardware model once cloud hosting fees are included, if you are running it in a public cloud or a VPC operated by one of the large public clouds. Expenses can also climb if you are trying to protect multiple clouds: a large ADC in one cloud cannot easily route traffic to protect an application running in another cloud. Because the license is annual, it tends to be viewed as CapEx rather than as an operating expense (OpEx), making it less attractive to finance teams. Rigid license terms also mean that planning is more "worst case" than "on demand."

Cloud-Native Software

You pay for what you need and can control costs on an hourly basis. This pricing and deployment model lets you tune ADC deployments to match actual application usage. You can also flex your ADC capacity based on expected traffic without paying annual fees or planning months in advance to move hardware into a cage. For finance, cloud-native ADCs tend to be viewed as OpEx and are treated almost as services; because they are so transient, there is no depreciation and no decay in value. You only pay for what you use, and you can either pay as you go (in an emergency) or purchase lower-cost capacity through spot and reserved instance pricing on the public cloud markets.

Uptime / Reliability / Resilience

Hardware Appliance

Uptime is binary. If you are down, you are down, and the only option is to deploy another physical ADC. This forces companies to deploy N+1 architectures and keep at least one expensive, high-capacity ADC idling to handle a complete failover. Even then, resilience suffers: in an outage, the single point of failure simply shifts to the backup unit rather than disappearing. If the backup goes down, your application will go down for an extended period.

Software Appliance

Unfortunately, this is not much better than hardware, even though you are not moving tin into a rack. To spin up a new software ADC with a monolithic architecture, you need to secure the license. That requires an actual procurement process with signatures and purchase orders.

To ensure uptime and resilience, you have to purchase licenses in excess of your required capacity. If you wish to run an application in multiple clouds, monolithic ADCs can cost you dearly because you will need resilience commensurate with potential usage in every cloud. This can be particularly tricky if you are using one cloud as a primary and another as a secondary and backup. Even with licenses already in hand, the best case is having a new ADC up and running in a matter of hours, and that is an eternity on the timescale of modern applications.

Cloud-Native Software

This type of ADC is based on containers, usually in large clusters of managed nodes, so it is designed to be deployed in a fully resilient fashion. One node going down has little impact on total availability or performance, and a container-based, cloud-native ADC can usually spin back up and handle traffic within minutes. What's more, a failed node can be replaced quickly because containers are designed to be plug-and-play; that is, in fact, how Kubernetes is meant to operate. This flexibility delivers the highest possible uptime and, ironically, does so at a much lower cost. It also allows companies to eliminate any single point of failure in the ADC tier.
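
As a rough illustration of that plug-and-play behavior, the sketch below declares a small ADC Deployment with a liveness probe using the Kubernetes Python client; the image, names, port, and health-check path are all hypothetical. With a spec like this, Kubernetes restarts an unhealthy ADC container and recreates any pod that disappears, with no procurement step in between.

```python
# Minimal sketch: a self-healing ADC tier. Three ADC replicas with a liveness
# probe; Kubernetes restarts unhealthy containers and replaces lost pods.
# All names, the image, the port, and the /healthz path are hypothetical.
from kubernetes import client, config

probe = client.V1Probe(
    http_get=client.V1HTTPGetAction(path="/healthz", port=8080),
    period_seconds=5,
    failure_threshold=3,
)

adc_container = client.V1Container(
    name="adc",
    image="registry.example.com/adc-proxy:latest",   # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
    liveness_probe=probe,
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="adc-proxy", namespace="edge"),
    spec=client.V1DeploymentSpec(
        replicas=3,   # no single ADC node is a single point of failure
        selector=client.V1LabelSelector(match_labels={"app": "adc-proxy"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "adc-proxy"}),
            spec=client.V1PodSpec(containers=[adc_container]),
        ),
    ),
)

config.load_kube_config()
client.AppsV1Api().create_namespaced_deployment(namespace="edge", body=deployment)
```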

Security and Monitoring

Hardware Appliance

These ADCs are tightly coupled, so they tend to rely on their own monitoring dashboards and applications, which are hard to integrate with more general monitoring tools such as Sysdig, Datadog, and New Relic. Many hardware ADC providers charge significant upgrade fees for advanced monitoring packages, and few, if any, offer useful out-of-the-box APIs that integrate with other monitoring tools. And if part of your application tier runs in a public cloud, you may end up maintaining three different monitoring applications: one for the hardware ADC, one for your on-premises infrastructure, and one for your cloud infrastructure.

Watching multiple screens and operating in multiple monitoring solutions taxes security operations teams. It also hurts security because updating policies and web application firewalls (WAFs) in hardware systems with the latest attack information is cumbersome, which leaves you at greater risk from zero-day and other novel attacks. Last, SecDevOps with hardware ADCs is impossible: there is no way to test how an application behaves in a staging environment with traffic passing through an ADC, or with ADC features such as SSL offloading and application acceleration in play.

Software Appliance

Many of the same problems found with hardware ADCs apply to the security and monitoring of monolithic software ADCs. These ADCs are often forklifted from their origins in a hardware appliance into a VM footprint, so they tend to look and feel the same as hardware ADCs, and that carries over into security and monitoring. Monolithic software ADCs are challenging to integrate with third-party monitoring tools and platforms, such as New Relic or Prometheus (for Kubernetes and cloud-native environments).

As with hardware, software ADCs are hard to update with new information on malicious hosts, and their policy engines are designed for infrequent updates and rule changes. This exposes the applications behind them to zero-day attacks and other fast-moving threats. Running two of these software systems in different clouds is a monitoring nightmare: public clouds tend to have different APIs, scripting conventions, and other quirks that force security and monitoring teams to maintain what are effectively two operations playbooks for monitoring ADCs. And good luck integrating all of this across multiple public clouds if you want a single source of monitoring truth.

Cloud-Native Software

The benefits of cloud-native for security and monitoring are pretty obvious. Cloud-native ADCs abstract the public cloud and its settings and conventions away from the monitoring dashboard and commands so operators can overlay a single monitoring layer atop multiple clouds. Cloud-native ADCs also allow for hybrid clusters that include legacy software ADCs and cloud-native ADC nodes.

Cloud-native ADCs are containerized, so it is simple to spin up a small container alongside a large legacy ADC running in a VM. That container will retain the same agents and connectivity to link with the control and service planes used to orchestrate and monitor clusters of ADCs running in the public cloud.
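
As one example of what that single monitoring layer can look like, the sketch below shows a containerized ADC node (or a sidecar agent beside a legacy ADC VM) exposing metrics in the Prometheus exposition format, so the same scrape configuration works in every cloud. The metric names, port, and sampled values are hypothetical.

```python
# Minimal sketch: an ADC node (or sidecar agent next to a legacy ADC VM)
# exposing metrics in the Prometheus exposition format. One monitoring layer
# can then scrape every node identically, whichever cloud it runs in.
# Metric names, the port, and the sampled values are hypothetical.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("adc_requests_total", "Requests handled by this ADC node")
UPSTREAM_LATENCY = Gauge("adc_upstream_latency_seconds",
                         "Most recent upstream response latency")

if __name__ == "__main__":
    start_http_server(9100)   # metrics served at http://<node>:9100/metrics
    while True:
        # Placeholder values standing in for real data-plane statistics.
        REQUESTS.inc()
        UPSTREAM_LATENCY.set(random.uniform(0.005, 0.050))
        time.sleep(1)
```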

Cloud-native software ADCs are widely distributed and linked via APIs designed for frequent updates and policy changes to the WAF and other systems. This design allows SecDevOps teams to take advantage of new integrations with threat intelligence tools to automatically block threats from newly identified malicious hosts without requiring manual pushes of new firewall rules or port-opening policies.
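
To illustrate the difference, here is a rough sketch of what an automated threat-intelligence integration might look like. The control-plane URL, endpoint path, token, and payload are hypothetical rather than any specific vendor's API; the point is that blocking newly identified malicious hosts becomes an authenticated API call instead of a manual firewall change on each box.

```python
# Rough sketch: pushing newly identified malicious hosts to a distributed ADC
# tier through its control-plane API. The URL, endpoint, token, and payload
# shape are hypothetical, not a specific vendor's API.
import requests

CONTROL_PLANE = "https://adc-control.example.com/api/v1"   # hypothetical endpoint
API_TOKEN = "replace-with-a-real-token"

def block_malicious_hosts(addresses: list[str]) -> None:
    """Append hosts to the WAF deny list; the control plane propagates the
    change to every ADC node in every cloud."""
    response = requests.post(
        f"{CONTROL_PLANE}/waf/deny-list",
        json={"addresses": addresses, "action": "block"},
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()

# Example: feed output from a threat-intelligence tool straight to the ADC tier.
block_malicious_hosts(["203.0.113.7", "198.51.100.22"])
```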

The distributed nature of cloud-native ADCs also enables the better deployment of modern zero-trust architectures. Zero trust works poorly in more centralized networks because of the high volume of round trips required to the monolithic hardware or software ADC, which can introduce application performance issues.

Conclusion: Distributed = Lower Cost, Better Performance, and Easier Monitoring

Old habits die hard. Knowing that you own an ADC and have a fully dedicated appliance serving your applications made sense in the pre-cloud and pre-cloud-native eras. The shift from hardware to monolithic software ADCs was a reasonable step toward more flexibility, resilience, and agility. Today, companies such as Airbnb, Netflix, and Walmart have adopted and embraced cloud-native, distributed components for their entire application stack, giving them a more resilient, secure, and agile application tier. Cloud-native ADCs are a logical extension of this trend. They afford operations, development, and security teams easier scalability, lower costs, higher uptime and resilience, and simplified monitoring and security.

Snapt can help you future-proof your business by making the switch to cloud native. Chat to our team today, or try out our cloud-native solution for yourself.

Try Nova Free