gRPC Load balancing using Envoy

Saurav Kumar
4 min read · Jan 28, 2023


Introduction

gRPC has gained a lot of traction in recent years because of its ease of use and the cross-platform support that earlier RPC frameworks lacked.

Doubtful Woody

But why is Woody not looking convinced here? Well, the reason is the complications involved in load balancing gRPC traffic.

See, gRPC is fantastic: high throughput (reportedly up to ~7x faster than REST), lower compute overhead, bidirectional streaming, etc. (not discussed here, OUT_OF_SCOPE_ERROR). Most of these fantastic features exist because gRPC uses the HTTP/2 protocol, which provides multiplexing: the same connection is reused for multiple requests for as long as the connection persists. And this is our problem.

Traditional REST APIs are easy to load balance, since every request opens a new connection to the server, proxied by a load balancer such as Nginx or an ALB. This is called L4 load balancing, where whole connections are distributed at the network protocol layer (TCP/UDP).

But why not use L4 load balancing?

Well, not such a big issue, just one of your production servers crashing under heavy load 💀 and then another (sarcasm alert). That happens because once a client gets a connection to one of the servers, it holds on to it and keeps sending everything down that connection. While the other servers sit and chill, one server is literally doing the work for all of them.

The Solution?

Since gRPC likes to have a persistent connection and we have to allow that, instead of load balancing at the connection level (TCP/UDP) we can load balance the individual requests flowing over that connection from the client to the servers. This is called L7 (application-layer) load balancing: the proxy understands HTTP/2 and can route each request (stream) independently.

These load-balancing problems are also solved in other ways, such as:

  • Client-side load balancing
  • Look-aside (external) load balancing
  • Reconnecting periodically

In this article, we will use one of the load-balancer servers that supports L7 for gRPC, and that is Envoy.

Envoy

Envoy is a proxy server created at Lyft, written in C++ to achieve high performance. It is a modern alternative to Nginx and HAProxy.

Practical time — K8s setup

Why k8s? Because most microservices run on Kubernetes these days, so it makes sense to demonstrate the setup there.

K8s components:

  • gRPC server pod
  • Envoy pod
  • Application headless service (to discover the gRPC application)
  • Proxy headless service (to discover the Envoy proxy)
  • A client (gRPC client to test)

HLD (high-level design)

We are using headless services because, by default, a k8s Service acts as a proxy and load balances requests across its pods. Right now, we just want the Service to act as DNS and return the IP addresses of the associated pods.

Connection steps:

  1. The gRPC client will resolve the domain name of the Envoy headless service ( envoy-hsvc ).
  2. It will make a connection to the Envoy pod. This connection will remain persistent.
  3. The Envoy pod will resolve the upstream domain name of the gRPC server using its headless service ( grpc-server ).
  4. The Envoy pod will create a persistent connection to one of the gRPC pods (say Pod1).
  5. The first request will be sent to Pod1.
  6. The next request will be sent to Pod2, after Envoy creates a connection to it.

We can also introduce one more Envoy pod for high availability. In this case the gRPC client will establish a connection to only one of them, and in case that Envoy pod goes down, it will resolve the DNS again and pick another one.

envoy k8s deployment object
envoy headless service object (used by the client to discover envoy pod)
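The embedded manifests are not reproduced above, so here is a minimal sketch of what they might look like. The names ( envoy , envoy-hsvc , envoy-config ), labels, and image tag are assumptions for illustration; only the headless-service idea ( clusterIP: None ) and the port 80 listener come from the article.

```yaml
# Hypothetical sketch -- names, labels, and image tag are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
        - name: envoy
          image: envoyproxy/envoy:v1.24-latest
          ports:
            - containerPort: 80
          volumeMounts:
            - name: envoy-config
              mountPath: /etc/envoy   # Envoy reads /etc/envoy/envoy.yaml by default
      volumes:
        - name: envoy-config
          configMap:
            name: envoy-config
---
# Headless service: clusterIP None means DNS returns the Envoy pod IP directly,
# with no kube-proxy load balancing in between.
apiVersion: v1
kind: Service
metadata:
  name: envoy-hsvc
spec:
  clusterIP: None
  selector:
    app: envoy
  ports:
    - port: 80
      targetPort: 80
```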

Now we certainly need rules/config for our Envoy server, so we have created a ConfigMap for that.

Envoy config-map
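The embedded ConfigMap is not reproduced above; below is a hedged sketch of an Envoy v3 config matching what the article describes: a listener on port 80 and an upstream cluster pointing at grpc-server:8000. STRICT_DNS makes Envoy resolve the headless service to all pod IPs and round-robin across them; the HTTP/2 upstream options are required because the backends speak gRPC. Resource names ( grpc_listener , grpc_cluster ) are assumptions.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: envoy-config
data:
  envoy.yaml: |
    static_resources:
      listeners:
        - name: grpc_listener
          address:
            socket_address: { address: 0.0.0.0, port_value: 80 }
          filter_chains:
            - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: ingress_grpc
                    codec_type: AUTO
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: backend
                          domains: ["*"]
                          routes:
                            - match: { prefix: "/" }
                              route: { cluster: grpc_cluster }
                    http_filters:
                      - name: envoy.filters.http.router
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
        - name: grpc_cluster
          # STRICT_DNS: resolve grpc-server (headless svc) to every pod IP
          type: STRICT_DNS
          lb_policy: ROUND_ROBIN
          # Speak HTTP/2 to the upstream gRPC pods
          typed_extension_protocol_options:
            envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
              "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
              explicit_http_config:
                http2_protocol_options: {}
          load_assignment:
            cluster_name: grpc_cluster
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address: { address: grpc-server, port_value: 8000 }
```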

The above is the complete setup of the Envoy server:

  • With the upstream set to grpc-server:8000 .
  • And it is listening on port 80.

Server and client

server listening on 8000

// ...
server.bindAsync('0.0.0.0:8000', grpc.ServerCredentials.createInsecure(), () => {
  server.start();
});
// ...

grpc-server (used by envoy to discover GRPC server)

grpc server headless service
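The embedded Service manifest is not shown above; a minimal sketch of what a headless service for the gRPC pods might look like (the app: grpc-server selector label is an assumption):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: grpc-server
spec:
  clusterIP: None       # headless: DNS returns pod IPs, no kube-proxy LB
  selector:
    app: grpc-server    # assumed label on the gRPC server pods
  ports:
    - port: 8000
      targetPort: 8000  # matches the port the server binds in server.bindAsync
```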

Client connecting to port 80

new Stub.StubClass('envoy-hsvc:80', grpc.credentials.createInsecure());

NOTE: Here we use the headless service name as the domain, but the port is Envoy's (not the Service's), because we only want the Envoy pod's IP from the Service; the client then connects directly to that pod.

And that’s a wrap. Ping me on LinkedIn or comment here if you have any doubts.

Hope you enjoyed and learned something and, more importantly, that our beloved Woody is convinced now. ADIOS!

https://www.linkedin.com/in/svkrclg/

https://github.com/svkrclg
