How Geographic Distance Affects Latency

We often speak of the effect latency has on network connections and on an end user's experience of applications over the internet. But what exactly is latency, and why is it such an important factor to consider when planning application infrastructure, including cloud deployment and load balancing?

Despite incredible advances in computer networking technology, high-speed connections, and the growth of cloud computing (which is bringing data centers to more locations than ever before), latency is still a major concern. Businesses need to plan carefully to mitigate latency and achieve better results for their customers.

This is especially important in markets where responsiveness is a key criterion, such as online gaming, streaming, and online retail. In all of these areas, users expect a smooth, consistent, and lag-free experience.

Factors Affecting Internet Speed

Apart from latency, many things can affect a user's internet speed at any given time when interacting with cloud infrastructure, including:

  • Bandwidth of your internet connection
  • Number of other users and/or applications using the bandwidth on your local network
  • Contention ratio
  • Distance from your local exchange
  • Throttling/traffic shaping by your ISP
  • Load on the server you are connecting to
  • Server and storage performance in the cloud infrastructure you are connecting to (for example, traditional HDD vs SSD)

These are all important factors that have a big impact on the online and cloud experience. Organizations hosting websites and applications can improve their performance by provisioning more (and faster) server capacity and adding load balancing and health checking, but hosts have little control over things like ISP shaping, contention ratios, and end-user connectivity. 
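
To make the load balancing and health checking part concrete, here is a minimal sketch of an active health check using only Python's standard library. It assumes each backend exposes a /health endpoint that returns HTTP 200 when the server is ready; the backend addresses are placeholders, not real infrastructure.

```python
import urllib.request

# Hypothetical backend addresses; replace with your own servers.
BACKENDS = ["http://10.0.0.11:8080", "http://10.0.0.12:8080"]

def is_healthy(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if the backend answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers connection errors, timeouts, and URLError
        return False

for backend in BACKENDS:
    print(f"{backend}: {'up' if is_healthy(backend) else 'down'}")
```

A load balancer runs checks like this continuously and only sends traffic to backends that pass, but none of this helps with the factors outside the host's control listed above.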

The best way to improve the online speed experience for the end-user, though, is to reduce the latency they experience - and the good news is that this is something hosts can address on their own.

Bandwidth vs. Latency

First, let's clear up a common misunderstanding about what bandwidth and latency actually are. This is worth doing because people often assume that getting a faster internet connection into their home or business will solve many of their online challenges.

However, the reality is that latency might have a bigger effect on the end-user experience (or the connectivity between two systems) than just pure available bandwidth. 

Bandwidth

Bandwidth is the amount of data a connection can transfer per second. A good way of thinking about it is to imagine it as a highway. A six-lane highway allows more cars to pass a certain point per second than a four-lane highway.

Similarly, a 1Gbps connection can transfer more data per second than a 100Mbps connection. 

Latency

Latency is the time it takes for a packet of data to move from its origin to its destination. If we keep with the highway analogy, latency is equivalent to the journey time to drive from point A to point B.

On a highway, the journey time will be affected by the distance traveled and by slow-downs on the way, such as toll booths and intersections. The greater the distance, the more toll booths and intersections a driver will likely encounter, compounding the factors that affect the journey time.

It’s the same with latency. 
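
To see why a faster line doesn't necessarily make pages feel faster, here is a rough back-of-the-envelope sketch in Python. The payload size, bandwidth figures, and 150ms round-trip time are illustrative assumptions, not measurements.

```python
def request_time_ms(payload_kb: float, bandwidth_mbps: float, rtt_ms: float) -> float:
    """Approximate time to fetch one payload: one round trip plus transfer time."""
    transfer_ms = (payload_kb * 8) / (bandwidth_mbps * 1000) * 1000  # kilobits / (kilobits per second)
    return rtt_ms + transfer_ms

# Fetching a 50 KB asset over a link with a 150 ms round trip:
for bandwidth in (100, 1000):  # 100 Mbps vs 1 Gbps
    total = request_time_ms(payload_kb=50, bandwidth_mbps=bandwidth, rtt_ms=150)
    print(f"{bandwidth} Mbps link: ~{total:.1f} ms")
```

Going from 100 Mbps to 1 Gbps trims the transfer time from about 4 ms to under 1 ms, but the 150 ms round trip dominates either way.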

What Causes High Latency?

Latency increases roughly in proportion to the distance between the end-user and the server they connect to.

When a user tells their browser to visit a website, their device sends a request to the destination server and receives a response at very high speed – potentially at the speed of light across a fiber-optic network. The data travels through a series of “gateway nodes”, or more simply, “hops”. And while light-speed is incredibly fast, each “hop” introduces an inherent processing delay as the router reads the data, interprets it, and sends it on to the next location.

Signals can travel efficiently only over relatively short distances, so longer distances are traversed with additional hops, where traffic is routed and signals are amplified. The greater the distance data needs to travel, the more hops it needs to make to reach its destination, and every hop introduces extra latency.

The greater the distance, the greater the latency. Yes – networking efficiency, ISP routing, and the quality of the routing devices do all play a part. But distance is the most significant factor. 
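
You can observe this distance effect yourself by timing how long a TCP handshake takes to servers in different parts of the world. The sketch below uses only Python's standard library; the hostnames are placeholders to swap for a nearby server and a far-away one.

```python
import socket
import time

def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Time a TCP handshake; roughly one network round trip to the host."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the connection closes immediately; we only wanted the timing
    return (time.perf_counter() - start) * 1000

# Swap these placeholders for a nearby host and a far-away one to compare.
for host in ("example.com", "example.org"):
    print(f"{host}: ~{tcp_rtt_ms(host):.0f} ms")
```

A server hosted on another continent will typically show a much higher figure than one in your own region, regardless of how fast your local connection is.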

Just How Much Does Latency Affect Your Speed?

Well, let's consider the ideal case: a high-speed fiber-optic network reaching every point on the globe, with light traveling through the fiber at approximately 200,000 km/second (about two-thirds of its speed in a vacuum).

Even at that incredible speed, it takes around 150ms for data to travel from the US West Coast to Central Asia. This theoretical scenario assumes that every link on the internet is fiber-optic, which we know is not the case for every location in the world, and it ignores the impact of routing devices and the numerous hops that traffic needs to make along the way. In reality, such a trip across the globe takes considerably longer.

However, even if we assume the best-case scenario of 150ms to reach a website across the globe, a user will start to notice the impact. Because data needs to travel both there and back, that means a delay of around 300ms before the response gets back to the user.

300ms might not sound like a noticeable delay when surfing a website if we think of it as a one-off delay. But we are not sending data only once. The reality is that there is a constant stream of messages going back and forth to provide relevant content and to ensure the reliability and security of the network. That 300ms delay affects every one of those communications.
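
Here is the same arithmetic as a small Python sketch, under the same idealized assumptions: light travels through fiber at roughly 200,000 km/second, and the cable-path length and number of exchanges are illustrative values, not measurements of any real route.

```python
FIBER_SPEED_KM_PER_S = 200_000  # approximate speed of light in optical fiber

def one_way_ms(path_km: float) -> float:
    """Propagation delay for a single one-way trip along the fiber path."""
    return path_km / FIBER_SPEED_KM_PER_S * 1000

path_km = 20_000   # illustrative length of a long-haul cable route
rtt_ms = 2 * one_way_ms(path_km)
exchanges = 10     # e.g. DNS lookups, a TLS handshake, and a series of HTTP requests

print(f"One way:    ~{one_way_ms(path_km):.0f} ms")
print(f"Round trip: ~{rtt_ms:.0f} ms")
print(f"{exchanges} sequential round trips: ~{exchanges * rtt_ms:.0f} ms")
```

Even in this ideal case with no routing overhead, ten back-and-forth exchanges over a 20,000 km path already add up to around two seconds of waiting.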

Now, increase the complexity of that data to something like an online multiplayer video game, which requires large amounts of data to be sent back and forth constantly between a host (with its own latency) and several peers (each with their own latency). That 300ms becomes infuriating: every button press or mouse movement effectively reaches the server 150ms after you make it, and you only see the result another 150ms later. Throw in users around the globe, all with a similar delay, and the game suddenly becomes unplayable, as what the players see on screen does not match their inputs, or even the game actions calculated on the host server (such as whether a shot hit its target).

And while online gaming is an extreme case, remember that this is still the idealized example of a high-speed network around the world that isn't affected by router processing. If that delay stretches into several seconds, even clicking around a website becomes a frustrating experience.

Building a Global or Hyper-Local Network

As the example above shows, even if we drastically improve our networking infrastructure and routing technology, latency will still have an impact. The best way to address it is to duplicate websites and data around the globe across many different cloud servers, so that they are as close as possible to every end-user.

And because routing technology and each network hop play such a significant role in overall latency, it's not just about getting data centers to every continent or country, but to every city on the planet, to create a “high-performing” experience for everyone.

While not every business needs to serve a global market, this still applies to more localized markets where a local business might want to provide a responsive experience to every user in their country or state. 

Getting data centers into every city on the globe is still a far-off dream even for the biggest cloud providers, but there are already servers in most major cities around the world, and global companies can make use of different cloud service providers to tap into these remote edge locations and provide a great experience for their clients.
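
As a simple illustration of “as close as possible to every end-user”, here is a sketch that routes a user to the nearest deployed region by great-circle distance. The region names and coordinates are made up for the example; a real deployment would typically steer traffic with GSLB or latency-based DNS rather than raw distance.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical deployed regions and their approximate (lat, lon) coordinates.
REGIONS = {
    "us-west": (37.77, -122.42),    # San Francisco area
    "eu-central": (50.11, 8.68),    # Frankfurt area
    "ap-southeast": (1.35, 103.82), # Singapore area
}

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def nearest_region(user: tuple[float, float]) -> str:
    """Pick the deployed region closest to the user."""
    return min(REGIONS, key=lambda region: haversine_km(user, REGIONS[region]))

print(nearest_region((48.85, 2.35)))  # a user in Paris -> "eu-central"
```

The more regions you deploy to, the shorter that nearest distance becomes for any given user, and the lower the latency they experience.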

How To Manage A Distributed Application

Managing a multi-cloud network or edge compute network can be a significant challenge, with the need to maintain security, visibility, and control over distributed systems with different standards. 

However, a modern load balancing and application security platform, such as Snapt Nova, can address these challenges with global server load balancing (GSLB), multi-cloud integration, cloud-native and hybrid compatibility, and centralized intelligence for automatic scaling, security policy, and observability.

Businesses using Nova can build massive global or hyper-local edge networks simply and cost-effectively, to achieve the lowest latency possible for their users.

Conclusion

Latency will continue to be a challenge for a long time, and it will only become more critical as more of the world relies on digital interactions in daily life.

By optimizing applications for a distributed global and local presence, businesses can create a better experience for everyone.
