It’s increasingly common for online businesses to distribute web infrastructure across multiple data centers, or clouds, located in different regions and even in other countries. Rather than putting all their eggs in one cloud basket, deploying in multiple sites ensures web services stay up and running while also improving the performance of those services. Ever wonder how Amazon or Google can serve up requested content in mere seconds to users all over the globe? Part of the reason is that their infrastructure is globally distributed so that they can store and process content closer to where their customers are.
You don’t have to be a giant of the Internet to make the case for leveraging more than one data center site. Distributing web servers across multiple sites helps online businesses of all sizes. But it also creates some new challenges. Indeed, at first glance, the idea of duplicating and spreading your servers across many locations looks like a sure way to add cost, complexity and delay into just about every networking process of your business.
For example, how do you ensure the right content goes to the right region without adding latency? How is traffic re-routed in the event of a failure? And how can you see how all sites are performing at any given time?
Challenges In A Multi-Site Cloud
Here are some of the main challenges we see businesses encounter when they decide to expand into multiple data centers:
- Performance. Geographically dispersed locations can hurt performance by increasing website response times, especially if traffic is sent to servers at random. To reduce latency and optimize performance, each request must be routed to, and served from, the server nearest the user.
- Relevant content. Users should receive content that is most applicable to them. If your business spans multiple countries, customers in Germany should receive German-language content, for example. Similarly, businesses that serve multiple regions within a country, like the U.S., should deliver local content relevant to customers in those regions.
- Redundancy. Multi-site server infrastructure is inherently redundant with backup sites and avoids the risks associated with having all servers located in a single data center. But you also need to have robust redundancy mechanisms in place so that traffic is re-routed intelligently and efficiently at the first signs of failure at one site.
- Regulatory compliance. Rules for storing content and user data vary from country to country. Complying with local laws and regulations adds a layer of complexity to managing traffic across your international infrastructure.
These challenges require intelligent, automatic traffic-routing decisions, and this is where Global Server Load Balancing (GSLB) comes in. GSLB load-balances traffic across geographically distributed servers, wherever they are located. It's the most cost-effective way to deliver content that is both close and relevant to users, while also ensuring the performance and high availability of your applications.
How GSLB Brings Smarts To Multi-Site Clouds
A software-based GSLB solution lets you easily set up colocation in any part of the world to support your business. After a simple initial configuration, the GSLB software intelligently makes nearest-host routing decisions. Built on the Domain Name System (DNS), it uses GeoIP routing to ensure that user traffic is always sent to the closest servers, which accelerates website response times. So users in the U.K. are directed to U.K. servers, and users on the West Coast of the U.S. are sent to the nearest server in their region.
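The core of GeoIP routing can be sketched in a few lines. This is a minimal illustration, not Snapt's implementation: the region codes and server addresses below are invented, and a real GSLB resolves the client's region from a GeoIP database (such as MaxMind's) inside an authoritative DNS server.

```python
# Minimal sketch of GeoIP-based DNS routing.
# REGION_SERVERS and the addresses are hypothetical example values.
REGION_SERVERS = {
    "GB": "10.0.1.10",       # U.K. data center
    "US-WEST": "10.0.2.10",  # U.S. West Coast data center
    "DE": "10.0.3.10",       # Germany data center
}
DEFAULT_SERVER = "10.0.2.10"  # fallback when the client's region is unknown

def resolve(client_region: str) -> str:
    """Return the server IP the DNS answer should carry for this client."""
    return REGION_SERVERS.get(client_region, DEFAULT_SERVER)

print(resolve("GB"))  # U.K. users are answered with the U.K. server
print(resolve("BR"))  # unknown regions fall back to the default server
```

Because the decision happens at DNS resolution time, no user traffic ever detours through a central point: clients connect directly to the server the answer names.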
This intelligent routing also enables the most relevant content to be sent to users. French users will receive French-language content, for example.
For redundancy configuration, you can also set parameters for weighted routing and backup policies so that the system knows when and where to re-route traffic in the event of service degradation at one site. This is imperative for business-critical applications, such as Microsoft Exchange. But it is just as important for achieving high performance and reliability for customer-facing sites and applications. Policies can also be set for compliance with local data regulations.
Importantly, the redundancy policies must be supported by real-time health checks. Your system should know the performance status of a site before routing traffic to it. With Layer 7 health checks, you can see if your services are online before sending responses to DNS queries. This is what makes the routing and redundancy decisions intelligent.
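Putting the two ideas together, weighted routing with a backup policy gated by a Layer 7 health check, might look like the following sketch. The site addresses, weights, and `/healthz` URLs are illustrative assumptions, not Snapt's configuration or API.

```python
import urllib.request

def http_healthy(url: str, timeout: float = 2.0) -> bool:
    """Layer 7 health check: the service must answer an HTTP request with 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

# Sites as (address, weight, health-check URL) -- hypothetical example values.
SITES = [
    ("10.0.1.10", 100, "http://10.0.1.10/healthz"),  # primary site
    ("10.0.2.10", 50,  "http://10.0.2.10/healthz"),  # backup site
]

def pick_site(check=http_healthy):
    """Return the highest-weighted healthy site, or None if all are down."""
    for addr, _weight, url in sorted(SITES, key=lambda s: -s[1]):
        if check(url):
            return addr
    return None

# Simulate the primary failing: only the backup's health check passes,
# so traffic is re-routed to the backup site.
print(pick_site(check=lambda url: "10.0.2" in url))
```

The key point the sketch illustrates: the health check runs before the DNS answer is chosen, so a failed site simply stops appearing in responses rather than receiving traffic that then times out.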
Not all GSLB solutions are alike, and certainly most solutions don’t make GSLB this easy. Here at Snapt, we include GSLB as part of our software Application Delivery Controller (ADC) because it’s essential for making sure your multi-site business always stays online and performs well.
To try Snapt for free, download our trial today.