12 Benefits and Challenges of Multi-Cloud Architectures

by Bethany Hill on Snapt Nova • February 19, 2021

Gartner, Forrester, and the other major analyst firms are tracking a sharp increase in interest in multi-cloud. A growing number of CIOs, architects, and IT teams are either considering or rolling out multi-cloud architectures for key applications. This is understandable. No one wants to get locked into a single cloud, and multiple clouds can let enterprises deliver applications closer to their customers and end users. Going multi-cloud, however, is a complex decision involving multiple trade-offs that every CIO should weigh before heading down that path.

What Is Multi-Cloud?

Multi-cloud is when an enterprise uses two or more public cloud computing services, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). Multi-cloud can also mean combining public and private cloud computing assets, such as a virtual private cloud (VPC) running in a public cloud or a hosted, dedicated private cloud running in your own data center or hosting environment.

Why are Enterprises Adopting Multi-Cloud?

Enterprises are adopting multi-cloud environments to more efficiently distribute computing resources, reduce risks of service interruption or data loss, improve security and DevOps through the standardization of tools and services not tied to a specific cloud, and take advantage of best-in-breed services. Multi-cloud allows architects and CIOs to deliver full N+1 capability across the entire application tier and lifecycle, simplifying compliance and business risk management.

The Benefits and Challenges of Multi-Cloud

A multi-cloud strategy can overcome the limits of redundancy, scale, cost, and features in a single cloud provider. Running in multiple clouds requires centralized visibility and control. Otherwise, your environment, resources, and policy will be fragmented. This creates additional complexity and cost through the multiplication of configuration, monitoring, and scaling tasks that cannot be automated centrally.


Six Benefits of Multi-Cloud

There are many clear benefits to going the multi-cloud route, including proximity, reduced lock-in, data center selection, and the ability to pick the best service on the market for each aspect of your application.

Here is a quick rundown of the major benefits.

1. Proximity to Customers

While all the major cloud providers have dozens of data centers, application architects want to run their application infrastructure as close to their customers as possible. Some cloud providers have data centers that are closer to key customers or key markets. This matters: data has less distance to travel, so your application is likely to respond faster. And when parts of the global Internet have problems, your application is more likely to keep performing normally because it has less exposure to those problems and can serve users locally.
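A simple way to make the proximity argument concrete is to measure it. Here is a minimal sketch that times a TCP connection to candidate regional endpoints; the hostnames are hypothetical placeholders, not real deployments.

```python
# Minimal latency probe: measure TCP connect time to candidate regions so you can
# place workloads near your users. The hostnames below are hypothetical placeholders.
import socket
import time

CANDIDATE_ENDPOINTS = {
    "aws-eu-west-1": ("example-app.eu-west-1.example.com", 443),
    "gcp-europe-west4": ("example-app.europe-west4.example.com", 443),
}

def connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    """Return the TCP connect time in milliseconds (a rough proxy for network proximity)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    for region, (host, port) in CANDIDATE_ENDPOINTS.items():
        try:
            print(f"{region}: {connect_latency_ms(host, port):.1f} ms")
        except OSError as exc:
            print(f"{region}: unreachable ({exc})")
```

Running a probe like this from your main customer geographies gives you hard numbers to compare providers and regions with, rather than relying on provider marketing.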

2. You Can Pick the Best Data Center... or Switch

Not all data centers are created equal. Some have newer hardware that may be more reliable and run cloud applications noticeably faster. Others may have better connectivity to the global Internet with higher-capacity fiber links. A third consideration is who the other tenants in a data center are. If your game application's EU presence is hosted in the same Ireland data center as a major pharmaceutical company that runs giant batch jobs of molecular analysis data in the late evenings, your application's performance may suffer from the "noisy neighbor" phenomenon. These factors are usually tribal knowledge, and the cloud providers refuse to discuss them, but they are crucial to real-world application performance. A multi-cloud strategy lets you pick the best-performing or best-connected data centers for a specific geography. Even better, it lets you switch between data centers should the performance of one degrade.

3. Best-of-Breed Cloud Services

Say you have a lot of data that you need to back up, you don't care how quickly you can access it, and you want to pay as little as possible because of the high volume. Then Amazon S3 Glacier might be your best bet. If you are running production applications in Kubernetes and need your cloud provider to bring your applications back up within minutes of a failure, then GCP is likely to give you better performance. Likewise, for machine learning applications, GCP offers Tensor Processing Units (TPUs), application-specific integrated circuits (ASICs) designed to accelerate machine learning workloads.
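The cheap-archival pattern mentioned above is typically implemented with an S3 lifecycle rule that transitions objects to the Glacier storage class after a set period. Here is a minimal sketch using boto3; the bucket name and prefix are hypothetical placeholders, and AWS credentials are assumed to be configured.

```python
# Minimal sketch: transition backup objects to the GLACIER storage class after 30 days.
# Bucket name and prefix are hypothetical; requires boto3 and configured AWS credentials.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-backup-bucket",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-backups",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},  # only archive objects under this prefix
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```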

4. Best-of-Breed Services Outside the Cloud

The best service or tool for what you need may not exist inside a single cloud, and when a cloud provider does offer it, the tool or service rarely works across competing clouds. In addition, when you are building an application for a single cloud, the easy choice is to adopt all of that cloud's tooling and services for your application, even if they are not the best on the market. For example, all cloud providers have their own tools for providing visibility into application performance, CPU and RAM usage, and data storage. They even have different tools for different applications: GCP has Stackdriver for Kubernetes logging and monitoring, while Amazon pushes users toward CloudWatch. These monitoring tools are more limited and less flexible than the monitoring and logging stacks DevOps teams can build themselves quite easily with Prometheus and Grafana, which are quickly becoming the industry standard for monitoring and dashboarding. Another example: automation capabilities differ greatly from cloud to cloud, and some clouds integrate more tightly with specific DevOps tools for continuous integration and continuous delivery (Jenkins, Gerrit, Salt, CircleCI, etc.). With a multi-cloud strategy, you can pick your favorite services and tools and run them across all your clouds.
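The Prometheus approach is attractive in a multi-cloud context precisely because the same instrumentation runs unchanged everywhere. Here is a minimal sketch using the prometheus_client library; the metric names, port, and "cloud" label are illustrative choices, not a prescribed convention.

```python
# Minimal sketch of cloud-agnostic instrumentation with prometheus_client:
# the same code runs in any cloud, and a central Prometheus server scrapes /metrics.
from prometheus_client import Counter, Histogram, start_http_server
import random
import time

REQUESTS = Counter("app_requests_total", "Total requests handled", ["cloud"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds", ["cloud"])

def handle_request(cloud: str) -> None:
    # Record latency and count for each handled request, labeled by cloud.
    with LATENCY.labels(cloud=cloud).time():
        time.sleep(random.uniform(0.01, 0.05))  # simulated work
    REQUESTS.labels(cloud=cloud).inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request("aws")  # in practice, set this label per deployment
```

Because every deployment exposes the same metrics, a single Grafana dashboard can compare clouds side by side.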

5. Resilience and Faster Recovery

Many organizations that run multi-cloud systems replicate the entire application delivery, data, and compute tiers across multiple clouds. This affords them N+1 or N+2 resilience and rapid failover if one cloud goes down (such as when GCP suffered an extended outage in August 2020, or when AWS allegedly lost customer data after a power outage fried hardware in a data center in 2019). If you elect to keep the application warm and running in multiple clouds, then failover can happen almost instantly and recovery from a potentially catastrophic failure is swift.
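At its core, warm active/standby failover is a health check plus a routing decision. The sketch below illustrates the idea in its simplest form; in production this logic usually lives in a DNS, traffic-management, or ADC layer, and the health-check URLs here are hypothetical placeholders.

```python
# Minimal sketch of active/standby failover between two warm multi-cloud deployments.
# URLs are hypothetical placeholders; standard library only.
import time
import urllib.request

PRIMARY = "https://app.aws-region.example.com/healthz"  # hypothetical primary deployment
STANDBY = "https://app.gcp-region.example.com/healthz"  # hypothetical standby deployment

def healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the health endpoint responds with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def active_endpoint() -> str:
    """Route traffic to the primary cloud unless its health check fails."""
    return PRIMARY if healthy(PRIMARY) else STANDBY

if __name__ == "__main__":
    while True:
        print("routing traffic to:", active_endpoint())
        time.sleep(30)  # re-evaluate health every 30 seconds
```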

6. Reduced Vendor Lock-in

Running an application in a single cloud locks you into what might be expensive business decisions. For example, in 2020, one of the major cloud service providers roughly doubled the price of one of its commonly used relational databases. For customers who had architected their application around that database, this represented an eye-popping bump in infrastructure costs. Another lock-in risk is that a cloud provider may reduce its emphasis on a service or product so that its feature set no longer keeps up with the market. In that situation, moving your application off the service may be too expensive or time-consuming, leaving you trapped with an inferior offering over time. Worst of all, a cloud provider may elect to end-of-life (EOL) a product or service. While providers always give early notification of an EOL process, dealing with the shutdown of a key service for your application can introduce unpleasant stress and complexity.

Six Challenges of Multi-Cloud

There are also a number of negative impacts that, if not properly managed, can result from a switch to multi-cloud. Increased complexity, lack of compatibility, and challenges with uniform security and monitoring are all common when an enterprise elects to go multi-cloud. Here is a rundown of some of the potential problems.

1. Increased Complexity

One of the hidden costs of multi-cloud computing can be increased complexity. Each cloud provider has its own set of conventions, commands, and ways of doing things. For example, AWS has three types of load balancers, while GCP has six, each performing different aspects of load balancing. Some work at Layer 7 and others at Layer 4. One of the GCP products has an integrated firewall, which means it must be managed differently. Deciding how to load balance and secure your application in one cloud is complex enough given the multiple offerings. Doing so across multiple clouds introduces escalating complexity. Multiply this complexity across numerous services, and your organization could quickly be overwhelmed.

2. Differences in Service Capabilities Across Clouds

Some clouds are better than others for different things. We talked about that in the best-of-breed section above. At a more granular level, different clouds have very different capabilities, even in core offerings, such as computing, storage, and networking. For example, in theory, Kubernetes should be managed in the same manner across all clouds. In practice, each major cloud provider has key differences in how they manage Kubernetes, the monitoring and security tools for these managed offerings, and performance levels. So even when a service is, on the face of it, exactly the same and built on the same core technology, there may be key differences underneath that can have a real impact on performance, resilience, and how you architect an application.
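One practical way to surface these differences is to query each managed cluster with the same tooling and compare what comes back. Here is a minimal sketch using the Kubernetes Python client; the context names are hypothetical, and a kubeconfig with one context per cluster is assumed.

```python
# Minimal sketch: compare node Kubernetes versions and OS images across managed
# clusters in different clouds. Context names are hypothetical; requires the
# `kubernetes` Python client and a kubeconfig containing one context per cluster.
from kubernetes import client, config

CONTEXTS = ["eks-production", "gke-production", "aks-production"]  # hypothetical contexts

for ctx in CONTEXTS:
    config.load_kube_config(context=ctx)
    nodes = client.CoreV1Api().list_node().items
    print(f"--- {ctx} ---")
    for node in nodes:
        info = node.status.node_info
        print(f"{node.metadata.name}: kubelet={info.kubelet_version}, os={info.os_image}")
```

Even this simple comparison often reveals version skew, different node operating systems, and different default labels across "identical" managed Kubernetes offerings.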

3. Harder to Manage Costs

Cloud provider pricing has become harder and harder to understand and manage over time. Cloud providers charge differently for computing, storage, networking, storage type (SSD vs. spinning disk), region or zone, data transit, and number of requests. Let's just consider load balancing. Most cloud providers charge not only for the size of the load balancer instance but also for the type of instance (spot or reserved), the number of requests handled per second, whether the load balancer is moving data within a region or across regions, and how many rules the load balancer applies when routing traffic. Charges for all of these aspects vary from cloud to cloud, and definitions of services do not even match up perfectly. For example, GCP has different networking tiers that do not exist in Azure. Managing and forecasting costs in one cloud is challenging but is made easier by the provider's cost management and forecasting tools. Across clouds, all of this breaks down, particularly if you regularly shift application infrastructure, data, or other elements between clouds to try to arbitrage costs.
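The sketch below makes the mismatch concrete: two providers with different billing dimensions for what is nominally the same load-balancing service. Every price in it is a hypothetical placeholder, not a published rate; the point is that the formulas themselves differ, so a simple per-hour comparison is misleading.

```python
# Minimal sketch: compare two hypothetical load-balancer pricing models.
# All rates are made-up placeholders to illustrate differing billing dimensions.
HOURS_PER_MONTH = 730
GB_PROCESSED = 5_000
RULES = 20

def cloud_a_cost(hourly=0.025, per_gb=0.008):
    # Hypothetical model A: per-hour charge plus per-GB data processing.
    return hourly * HOURS_PER_MONTH + per_gb * GB_PROCESSED

def cloud_b_cost(hourly=0.030, per_gb=0.006, per_rule_hour=0.01, free_rules=5):
    # Hypothetical model B: also bills per forwarding rule per hour beyond a free tier.
    extra_rules = max(0, RULES - free_rules)
    return (hourly * HOURS_PER_MONTH
            + per_gb * GB_PROCESSED
            + per_rule_hour * extra_rules * HOURS_PER_MONTH)

print(f"cloud A: ${cloud_a_cost():,.2f}/month")
print(f"cloud B: ${cloud_b_cost():,.2f}/month")
```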

4. Slower Time-To-Market on Major Application Changes

Deploying an application on multiple clouds can slow down shipping new features and functionality because you need to test the changes thoroughly in each cloud environment. In theory, running comparable, containerized components at each tier should mitigate this risk. In practice, even containerized applications behave differently on different clouds. For mission-critical applications and capabilities, budget extra time and effort for testing.
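One way to keep this testing overhead manageable is to run the same release checks against every cloud automatically. Here is a minimal sketch using pytest parametrization; the deployment URLs are hypothetical placeholders, and a /healthz endpoint is assumed.

```python
# Minimal sketch: run the same smoke test against each cloud deployment.
# URLs and the /healthz endpoint are hypothetical; requires pytest and requests.
import pytest
import requests

DEPLOYMENTS = {
    "aws": "https://app.aws.example.com",      # hypothetical
    "gcp": "https://app.gcp.example.com",      # hypothetical
    "azure": "https://app.azure.example.com",  # hypothetical
}

@pytest.mark.parametrize("cloud,base_url", DEPLOYMENTS.items())
def test_health_endpoint(cloud, base_url):
    # The same assertion runs once per cloud, so a regression in any environment fails the build.
    resp = requests.get(f"{base_url}/healthz", timeout=5)
    assert resp.status_code == 200, f"{cloud} deployment is unhealthy"
```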

5. Increased Security Risk and Attack Surface

Increased complexity can lead to increased security risk. In a multi-cloud environment, your security team will need to monitor two or three times as many services running in different clouds, which makes it easier for attackers to disguise their activity and go unnoticed. Security teams will also need to configure and test two or three times as many security appliances and tools. This puts a lot more strain on already overloaded security teams and increases the likelihood of human error, such as a misconfiguration or a missed update. DevOps teams dealing with multi-cloud may get frustrated with the complexity and create workarounds that enlarge the attack surface and add risk. In addition, data moving between clouds means more exposure and a larger attack surface.

6. Does Not Address Best-of-Breed Lock-in

If you are treating multi-cloud as a true N+1 solution, with complete and completely separate application replicas running in multiple clouds, then lock-in risk is addressed. However, many enterprises adopt a best-of-breed strategy for different aspects of cloud computing. They may use AWS as their primary data store, for example, and GCP as their primary compute cloud for ML applications. In fact, it can be worse than that: you may have one team running its ML work in GCP and another in Azure. This papers over lock-in by fragmenting it at the service level.

How Multi-Cloud With Snapt Nova Solves the Challenges

A multi-cloud application delivery and security solution, such as Snapt Nova, provides the centralized visibility and control necessary to get the benefits of a multi-cloud strategy. Nova reduces complexity and cost by enabling seamless multi-cloud redundancy, autoscaling, monitoring and optimization, and “least-cost routing,” all from a single dashboard.

  • In a multi-cloud deployment, you need to enforce security policy in every cloud and limit your exposure to threats. Nova allows you to set policy centrally and propagate it to every node in every cloud simultaneously. This radically simplifies life for your security team and reduces the time spent manually managing rules and policies across clouds. Nova also improves security by reducing the attack surface and providing integrated security capabilities, such as a web application firewall (WAF) and threat intelligence data and signatures collected from a global network of deployed ADCs.
  • In a multi-cloud deployment, you need to monitor the health and performance of every cloud and your application performance as a whole. Nova provides a holistic view of your network and applications in a single pane of glass, regardless of location, environment, or infrastructure. This reduces the complexity of building and maintaining a monitoring environment, and delivers clear apples-to-apples performance comparisons of different clouds.
  • In a multi-cloud deployment, you need to ensure seamless failover between clouds to achieve the highest redundancy and disaster mitigation. Nova monitors the health and availability of every cloud in your application architecture. In five minutes or less, Nova can redeploy resources and reroute traffic to a healthy cloud in the event of a failure.
  • In a multi-cloud deployment, you need to scale in and out while controlling costs. Nova can autoscale resources across multiple clouds to meet demand and target the lowest-cost provider in real time (a generic sketch of this kind of least-cost selection follows below). Nova has no "warm-up" period and can scale up instance sizes in minutes in the event of a demand spike, outage, or security incident.
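To illustrate the least-cost idea referenced in the last bullet, here is a generic sketch (not Nova's actual API) of picking the cheapest healthy cloud as the scale-out target. The health flags and hourly costs are hypothetical placeholders that would normally come from your monitoring and billing tooling.

```python
# Generic sketch of least-cost target selection (not Nova's API).
# Inputs are hypothetical placeholders supplied by monitoring and billing systems.
from typing import Dict

def pick_scale_target(health: Dict[str, bool], hourly_cost: Dict[str, float]) -> str:
    """Return the lowest-cost cloud that is currently passing health checks."""
    candidates = [cloud for cloud, ok in health.items() if ok]
    if not candidates:
        raise RuntimeError("no healthy cloud available for scale-out")
    return min(candidates, key=lambda cloud: hourly_cost[cloud])

if __name__ == "__main__":
    health = {"aws": True, "gcp": True, "azure": False}          # hypothetical health-check results
    hourly_cost = {"aws": 0.096, "gcp": 0.089, "azure": 0.092}   # hypothetical $/hour
    print("scale out in:", pick_scale_target(health, hourly_cost))
```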

To be clear, an application delivery and security platform like Snapt Nova cannot address every challenge of going multi-cloud. For example, the risks carried by a best-of-breed approach can be mitigated only by careful planning and organization. That said, Snapt Nova does address the majority of multi-cloud challenges by reducing complexity and cost, and by improving security and control over applications, whether you deploy in public clouds or in your own data center on VMware. Snapt gets you closer to the freedom and flexibility of cloud agnosticism with a minimum of pain.

Try Nova Free