Load Balancing Kubernetes on DigitalOcean

by Bethany Hendricks on Kubernetes • December 22, 2020

Many of our customers are exploring the use of Kubernetes. Some are looking to use it as part of a multi-cloud strategy to reduce business risk. Others are looking at dividing monolithic apps into microservices and leveraging Kubernetes as a management and orchestration plane for all the separate services created from the decoupled application.

A number of them have come to us in recent weeks asking how to deploy production-grade, high-capacity load balancing on Kubernetes with DigitalOcean. They are looking at DigitalOcean's Managed Kubernetes service because managing Kubernetes is challenging. The problem they have encountered is that DigitalOcean Managed Kubernetes strongly encourages them to spin up a DigitalOcean load balancer. While the DigitalOcean load balancer is a good start, it has a few issues:

  • One service per load balancer – this is not economically viable
  • Only 10k RPS per load balancer – this capacity is too low for many high-usage apps
  • Missing security features like WAF – a WAF is commonly offered as a feature of many load balancers like HAProxy or Snapt’s own products
  • No scaling capability in the DigitalOcean load balancers – the only option is to use a load balancer to its full capacity and then spin up another one, splitting the service traffic
  • Only a basic rules and policy engine for the DigitalOcean load balancer – for more complex applications, SREs and DevOps teams will likely want more granular control on ingress
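The first issue stems from how Kubernetes cloud integrations work: every Service of `type: LoadBalancer` provisions its own dedicated DigitalOcean load balancer. A minimal sketch of such a manifest (the service name and labels here are illustrative, borrowed from the Guestbook example used later):

```yaml
# Each Service of type LoadBalancer gets its own DigitalOcean load balancer.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer   # DigitalOcean provisions a dedicated LB for this service
  selector:
    app: guestbook
    tier: frontend
  ports:
    - port: 80
      targetPort: 80
```

Ten services exposed this way means ten billed load balancers, which is the economic problem described above.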


There is a nifty way to deploy Snapt Nova ADCs as load balancers in front of DigitalOcean managed K8S clusters that results in better performance, lower cost, and higher capacity. This step-by-step guide (which we mirrored from our support pages) explains how to use Nova ADCs instead of DigitalOcean load balancers on K8S clusters within DO.

K8S Configuration and NodePort

For this demonstration, let's use the Kubernetes Guestbook application. (You can use any service if you prefer, though.) Note below that we are deploying a two-node cluster:

[Image: digitalocean load balancing kubernetes configuration and nodeport]

On that Kubernetes cluster we deployed the Guestbook app. The Guestbook app deploys a service called "frontend". If we describe that service, we can see the NodePort that has been allocated. (Note: for this demo, the service type is NodePort rather than LoadBalancer.)

❯ kubectl describe svc frontend
Name:                     frontend
Namespace:                default
Labels:                   app=guestbook
Annotations:              <none>
Selector:                 app=guestbook,tier=frontend
Type:                     NodePort
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  31971/TCP
Endpoints:      ,,
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The key detail here is the NodePort: this service is exposed on port 31971 on every node in the cluster. That's what we need from this output.
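If you want to script this step, the NodePort can be extracted directly rather than read off the describe output. A small sketch; the jsonpath query is standard kubectl, and the sample line below stands in for live cluster output:

```shell
# On a live cluster you would simply run:
#   kubectl get svc frontend -o jsonpath='{.spec.ports[0].nodePort}'
# Here we parse the same field out of the `kubectl describe` line shown above.
SAMPLE='NodePort:                 <unset>  31971/TCP'
NODEPORT=$(printf '%s\n' "$SAMPLE" | awk '/NodePort:/ {split($3, a, "/"); print a[1]}')
echo "$NODEPORT"   # 31971
```

Capturing the port this way makes the later load-balancer configuration reproducible.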

Node IPs

Now that we have the NodePort (31971 in our case) we need to know the IP addresses to send traffic to. These are the actual IPs of the droplets in our Kubernetes cluster, not the Endpoints within Kubernetes. Go to Droplets and you can see them, as shown below:

[Image: digitalocean load balancing kubernetes node ip]

Note the public IPs. Ensure your firewall setup at DigitalOcean allows traffic to the NodePort, then connect to the NodePort from above (31971 for us) on those IPs to verify:

❯ curl http://<node-ip>:31971
<html ng-app="redis">
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.1.1/css/bootstrap.min.css">
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.2.12/angular.min.js"></script>
    <script src="controllers.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/angular-ui-bootstrap/0.13.0/
</head>
<body ng-controller="RedisCtrl">
  <div style="width: 50%; margin-left: 20px">
    <h2>Guestbook</h2>
    <form>
      <fieldset>
        <input ng-model="msg" placeholder="Messages" class="form-control" type="text" name="input">
        <br>
        <button type="button" class="btn btn-primary" ng-click="controller.onRedis()">Submit</button>
      </fieldset>
    </form>
    <div>
      <div ng-repeat="msg in messages track by $index">{{ msg }}</div>
    </div>
  </div>
</body>
</html>
That means we can load balance to this service from Nova Nodes in DigitalOcean.
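To check every node at once, a short loop works. This is a sketch assuming two droplets; 203.0.113.x are placeholder documentation IPs, so substitute your real node IPs and your NodePort:

```shell
NODEPORT=31971
# Placeholder documentation IPs - replace with your droplets' public IPs.
NODES="203.0.113.10 203.0.113.11"
for ip in $NODES; do
  # -s silent, -f fail on HTTP errors, -m 3 three-second timeout
  if curl -sf -m 3 "http://$ip:$NODEPORT/" >/dev/null 2>&1; then
    echo "$ip:$NODEPORT OK"
  else
    echo "$ip:$NODEPORT unreachable"
  fi
done
```

Any node reporting unreachable usually points at the DigitalOcean firewall rules rather than the service itself.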

Deploying Nova

You have multiple options for how to deploy Nova into DigitalOcean. We recommend adding DigitalOcean as a Connected Cloud and deploying directly into it, either as a fixed number of droplets or as an AutoScaler that will automatically provision however many droplets are needed.

If for some reason you need a custom install, you can also run Nova on any stock Ubuntu system; just launch your own Ubuntu droplets.

You can follow the cloud guide here or the manual install guide here.

[Image: digitalocean load balancing kubernetes deploy snapt nova]

You need at least one Nova droplet deployed into the environment to load the ADC onto. These are standard droplets outside of the K8S cluster.

Configuring Nova Backend

There are two things to configure on Nova - a backend, and an ADC.

For the backend you have options as well. The backend defines where traffic is sent - in this case, your K8S node IPs and NodePort.

You can use a Simple Backend, where you specify the two node IPs and ports directly (remember to use the IPs and NodePort you discovered above). The simple backend looks like this on Nova:

[Image: digitalocean load balancing kubernetes snapt nova backend]
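Conceptually, the simple backend is roughly equivalent to an HAProxy-style backend stanza: one server line per Kubernetes node, each pointing at the NodePort. This is an illustration only, not Nova's literal configuration format, and the IPs are hypothetical placeholders:

```
# Illustrative HAProxy-style view of the simple backend (not Nova's config syntax).
backend guestbook_nodeport
    balance roundrobin
    # One server line per Kubernetes node, targeting the NodePort.
    server node1 203.0.113.10:31971 check
    server node2 203.0.113.11:31971 check
```

The health checks mean a failed node is automatically removed from rotation, which the Kubernetes `externalTrafficPolicy: Cluster` setting (shown in the describe output above) tolerates well, since any node can forward to the pods.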


Or, you can use DigitalOcean's tags. Add a Cloud API backend, enter port 31971 (in our case), and choose the tag "k8s" if you only have one managed Kubernetes installation.

The Cloud API backend looks like this on Nova:

[Image: digitalocean load balancing kubernetes snapt nova backend]

Configuring Nova ADC

Now that we have the backend, the ADC part is easy! Add an ADC type, typically HTTP or SSL Termination, and set it to run on port 80 or port 443 (and so on).

Under the Backends section, set it to send traffic to the backend you just added - your Kubernetes service. See below:

[Image: digitalocean load balancing kubernetes attach an ADC]

Then configure any other options you want, and save. At this point, attach it to your new droplet(s) in DigitalOcean and you'll be online!

[Image: digitalocean load balancing kubernetes snapt nova adc]


Please contact us if you need any assistance with the deployment.


Notes:

  1. You can define a static NodePort in your Kubernetes services so this behavior is more predictable.
  2. You can also manually publish any local ingress services on Kubernetes and use this functionality with them.
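On the first note: pinning the NodePort in the service spec keeps the port stable across redeploys, so the load balancer configuration never has to change. A minimal sketch; 30080 is an arbitrary choice within Kubernetes' default 30000-32767 NodePort range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: NodePort
  selector:
    app: guestbook
    tier: frontend
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080   # fixed, so the ADC backend config stays valid
```

Without the `nodePort` field, Kubernetes assigns a random port from the range on each creation, which is why our demo ended up on 31971.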