Local Development with Lots of Microservices (Part 1)

How to use Envoy and Houston to develop on Kubernetes without running every service on your laptop

TR Jordan
Turbine Labs

--

One of the biggest advantages of microservices is that each service has a codebase small enough for you to fit it all in your head. The biggest disadvantage is that there’s more than one service. Unless you’re running a service that’s particularly simple, you need service-to-service communication. It can be difficult to figure out how to run all the dependencies for a service while quickly iterating on the code. If 10 services slow down your laptop, 100 services will make it unusable. Even worse, loading the entire app with development data can make it feel like you’re trying to recreate production on a single computer!

Generally, people take two approaches to solving this problem:

  • Offload development to big machines and work remotely. Getting workable test data onto those machines can be a problem, and not everybody likes developing in a terminal on an EC2 instance.
  • Provide pre-fab staging environments for individuals to reserve. It still takes time to deploy local changes into any environment, and each environment has to be maintained separately.

In both cases, hardware costs money, and depending on your budget, these environments may become expensive or a productivity bottleneck.

One way to fix this is to offload the dependent services to a remote cluster and only run the services that are under active development on your laptop. Done right, this can provide the best of all worlds: quick local iteration, stable dependencies and test data, and only a single shared development environment to maintain.

Let’s look at how to build this!

Calling Remote Services

First, let’s set up our laptop to call out to the remote services. We’ll assume that both the local and remote services are running in Kubernetes, but this approach works regardless of how the remote services are deployed. Let’s assume a simple service with three dependencies. At a high level, the service under development runs on the laptop and calls out to its three dependencies in the remote cluster.

Without the same service discovery mechanisms that your app probably has in prod, the question is, how do requests from a laptop find the right instances in the staging cluster? We can create this bridge with two Envoy instances: one to catch all local traffic and pass it to the remote cluster, and one in the remote cluster to find the running instances and load balance over them.

To see this in action, check out the local dev example on GitHub, which uses just two services in a similar model.
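To make the bridge concrete, here’s a minimal sketch of what the laptop-side Envoy’s bootstrap might look like if written statically. In the workflow described above this configuration is generated dynamically by Rotor and Houston; the static version below is only illustrative, and the remote address `staging.example.com:30080` is a placeholder for your cluster’s NodePort.

```yaml
# Hypothetical static bootstrap for the laptop-side Envoy.
# It catches all local traffic on port 80 and forwards it to
# the remote cluster's front-proxy Envoy.
static_resources:
  listeners:
  - name: local_catchall
    address:
      socket_address: { address: 127.0.0.1, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: local_egress
          route_config:
            virtual_hosts:
            - name: all
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: remote_staging }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: remote_staging
    type: STRICT_DNS
    load_assignment:
      cluster_name: remote_staging
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              # Placeholder: the NodePort address of the remote Envoy
              socket_address: { address: staging.example.com, port_value: 30080 }
```

In practice the dynamic setup avoids ever hardcoding that remote address, which is the point of the Rotor-and-Houston plumbing described below.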

There are three key points this workflow addresses:

  • Envoy configuration
  • Adding new services
  • Keeping staging secure (or at least hidden)

Envoy Configuration

While Envoy can do a ton of fancy tricks to make services more stable, here we only need its routing capabilities. In the example code, the Envoy at the edge of the cluster is a straightforward front proxy. Because Kubernetes Pods may restart with new IPs at any time, it is configured using Houston and Rotor: Rotor collects the Pod IPs from Kubernetes and passes them to Houston, and Houston generates the routes that Envoy serves. By exposing Envoy through a NodePort Service, Kubernetes makes it reachable on every node’s IP at a fixed port, so anybody can hit it. (More on keeping this secure below!)
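A hedged sketch of how the front-proxy Envoy might be exposed, assuming it runs as a Deployment labeled `app: envoy-front-proxy` (the name, label, and ports are placeholders, not taken from the example repo):

```yaml
# Hypothetical NodePort Service for the cluster's front-proxy Envoy.
# NodePort makes Envoy reachable on every node's IP at the chosen port.
apiVersion: v1
kind: Service
metadata:
  name: envoy-front-proxy
spec:
  type: NodePort
  selector:
    app: envoy-front-proxy
  ports:
  - name: http
    port: 80          # Service port inside the cluster
    targetPort: 8080  # Envoy's listener port in the Pod
    nodePort: 30080   # externally reachable port on each node
```

This is what gives the laptop-side Envoy a stable target to forward to.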

To figure out the route from laptop to cluster, we also run a Rotor that collects the NodePort IP and port of the remote Envoy and uses that to configure the local Envoy. This value gets stored in Houston, and whenever the laptop Envoy comes online, it gets the latest value of the IP for the remote cluster. No DNS needed!

Adding New Services

One of the big problems this request flow solves is that local laptops never need to update their routing to handle new services. The only configuration needed to make this work is that the default outgoing host and port for services should be localhost:80. Megan can add a new Service 5 in the Kubernetes cluster that’s a dependency of Service 1, and when Joe pulls in the latest master to his local copy of Service 1, it’ll automatically send all traffic through the local Envoy and up to Kubernetes, where Service 5 is ready and waiting.

If Joe wants to test a change to Service 5, he can spin it up on his laptop and update the local configuration, while Services 2 through 4 still run in the cloud.
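One way Joe’s laptop Envoy could split traffic is with a route table like the following sketch, assuming Service 5 is addressed by the Host header `service5` and his local copy is registered as the cluster `local_service5` (both names are illustrative):

```yaml
# Illustrative route table for the laptop Envoy: requests for
# Service 5 go to the locally running copy, everything else is
# forwarded to the remote staging cluster.
route_config:
  virtual_hosts:
  - name: service5_local
    domains: ["service5"]
    routes:
    - match: { prefix: "/" }
      route: { cluster: local_service5 }   # e.g. the copy on localhost
  - name: everything_else
    domains: ["*"]
    routes:
    - match: { prefix: "/" }
      route: { cluster: remote_staging }   # the cluster's front-proxy Envoy
```

Because Envoy matches the more specific virtual host first, only Service 5 traffic stays local; removing that entry restores the default all-remote behavior.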

Keeping Staging Secure

Another problem with staging is keeping it hidden from the outside world, while still allowing developers to get to it easily. There are a couple approaches to securing the cluster without getting in the way of users:

  • Put everything on a VPN. This is simple, and it even means you can assign a DNS entry to the front proxy, making it easy to do integration tests against the cluster.
  • Use Envoy’s ext_authz filter to validate requests from users. The required credentials could even be added automatically on developers’ laptops using the request_headers_to_add option on their laptop Envoys, though if your application already relies on ext_authz for authentication, this could get messy.
  • Distribute client certificates to laptops and validate them in Envoy.

In all cases, these can be dynamically handed out to laptops, cleanly separating access to staging from any development concerns.
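As one example, the client-certificate option might look like the sketch below on the remote Envoy’s listener: a DownstreamTlsContext that requires and validates client certs. File paths are placeholders; the certificates themselves would be distributed to developer laptops separately.

```yaml
# Sketch: the front-proxy listener requires a client certificate
# signed by a development CA before accepting any request.
transport_socket:
  name: envoy.transport_sockets.tls
  typed_config:
    "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
    require_client_certificate: true
    common_tls_context:
      tls_certificates:
      - certificate_chain: { filename: /etc/envoy/certs/server.crt }
        private_key: { filename: /etc/envoy/certs/server.key }
      validation_context:
        trusted_ca: { filename: /etc/envoy/certs/dev-ca.crt }
```

The laptop-side Envoy would present the matching client certificate on its upstream connection, so developers never handle the credentials directly.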

Trying It Out

If you want to get started, check out the GitHub repo and sign up for a Houston account today!

Coming soon: sending requests from the development cluster to your laptop. If you want to get notified when it’s out, sign up for our email list!
