Modeling Environments with Linkerd Ingress (Part 1/3)

Overview, Setup, and a simple Use Case

Jeff Gensler
5 min read · Apr 10, 2017

Overview

First, we must understand what an environment is and why it is helpful for someone building a product.

In software deployment, an environment or tier is a computer system in which a computer program or software component is deployed and executed.
- Wikipedia

Typically, there are multiple pieces of software that are owned (built, deployed, versioned) by independently functioning teams. Almost all of the time, these pieces of software integrate with one another to provide a comprehensive solution to a customer's need (like buying a pair of socks online or watching a movie online).

The hardest part of this process is the integration. My favorite way to think about this concept is the term “contract.” Your service makes a contract with an integrating team that says “Hey, if you pass me this JSON, I’ll send you X if it looked like Q and Y if it looked like R… Oh, and I’ll send you F if there was an error on our end.” As long as both sides understand and respect this contract, everyone can build a successful application.
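To make the contract idea concrete, here is a minimal sketch of a hypothetical operator endpoint honoring one: well-formed input gets a result, malformed input gets an error object. None of these names or shapes come from the Calculator repository; they are illustrative assumptions.

```python
def multiply_handler(request: dict) -> dict:
    """Honor a hypothetical contract: {"operands": [...]} in,
    {"result": ...} out on success, {"error": "..."} out on bad input."""
    operands = request.get("operands")
    if not isinstance(operands, list) or not all(
        isinstance(n, (int, float)) for n in operands
    ):
        # The "F if there was an error" half of the contract.
        return {"error": "operands must be a list of numbers"}
    result = 1
    for n in operands:
        result *= n
    return {"result": result}
```

As long as both sides agree on these two response shapes, either side can change its internals freely.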

Our Environment

The Calculator Service

We will be using a sample Calculator application I have written that integrates a few services. To use the service, you send the gateway an equation as a string. When the gateway receives this string, it will tokenize the string into operations and execute those operations. At the moment, the gateway is the only service that integrates with other applications. Let’s assume that each service is owned by an individual team.
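The tokenize step might look something like the sketch below. This is an assumption about how the gateway could split an equation into operands and operators; the real Calculator service's grammar and evaluation order may differ.

```python
import re

# Match runs of digits or a single arithmetic operator.
TOKEN_RE = re.compile(r"\d+|[+\-*/]")

def tokenize(equation: str) -> list:
    """Split an equation string like '1+6*2' into its tokens."""
    return TOKEN_RE.findall(equation)
```

Each operator token would then be dispatched to the team that owns that operation.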

Our Organization

Our organization figured out that production was not the place to test changes, so we’ve created a static environment to test our changes. We have named this environment Pre-Production because it is used for all sorts of things! All that matters is that it isn’t production.

We’ve been reading Twitter lately, so it seemed like a good idea to build a Kubernetes cluster for each environment.

Demo: Two Environments, each with the Calculator Service integrated with KubeDNS

- name: GATEWAY_TOKENIZERSERVICE
  value: http://tokenizer.team-tokenizer:80/tokenize
- name: GATEWAY_ADDITIONSERVICE
  value: http://addition-operator.team-addition-operator:80/operate
- name: GATEWAY_SUBTRACTIONSERVICE
  value: http://subtraction-operator.team-subtraction-operator:80/operate
- name: GATEWAY_MULTIPLICATIONSERVICE
  value: http://multiplication-operator.team-multiplication-operator:80/operate
- name: GATEWAY_DIVISIONSERVICE
  value: http://division-operator.team-division-operator:80/operate
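With KubeDNS, each URL bakes the owning team's namespace into the hostname. A sketch of how the gateway might resolve an operator token to its configured URL follows; the mapping and helper are hypothetical, not taken from the repository.

```python
import os

# Hypothetical mapping from operator token to the environment variable
# that holds its service URL (the real gateway's dispatch may differ).
OPERATOR_ENV = {
    "+": "GATEWAY_ADDITIONSERVICE",
    "-": "GATEWAY_SUBTRACTIONSERVICE",
    "*": "GATEWAY_MULTIPLICATIONSERVICE",
    "/": "GATEWAY_DIVISIONSERVICE",
}

def operator_url(token: str) -> str:
    """Look up the service URL configured for an operator token."""
    return os.environ[OPERATOR_ENV[token]]
```

Because the namespace is part of the hostname, pointing the gateway at a different team's build means editing these values and redeploying.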

Demo: Two Environments, each with the Calculator Service integrated with Linkerd

The following are the environment variables that the gateway service uses to communicate with the other services. Note that exactly the same environment variables would be used in production as in Pre-Production.

env:
- name: HTTP_PROXY
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
- name: GATEWAY_TOKENIZERSERVICE
  value: http://tokenizer/tokenize
- name: GATEWAY_ADDITIONSERVICE
  value: http://addition-operator/operate
- name: GATEWAY_SUBTRACTIONSERVICE
  value: http://subtraction-operator/operate
- name: GATEWAY_MULTIPLICATIONSERVICE
  value: http://multiplication-operator/operate
- name: GATEWAY_DIVISIONSERVICE
  value: http://division-operator/operate

Here is what an Ingress Object looks like (Team-Multiplication-Operator):

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "multiplication-operator"
  namespace: "team-multiplication-operator"
  annotations:
    kubernetes.io/ingress.class: "linkerd"
  labels:
    app: "multiplication-operator"
    environment: "preprod"
spec:
  rules:
  - host: "multiplication-operator"
    http:
      paths:
      - backend:
          serviceName: "multiplication-operator"
          servicePort: "my-http"
Linkerd graph for the ab run below. Tokenizer and Subtraction are called fewer times than the other operators.

# 1 + 6 + 7 * 2 / 1 * 99 - 72 / 100
ab -H "Host: gateway" -c 20 -n 500 "172.17.4.3:1080/compute?equation=1%2B6%2B7%2A2%2F1%2A99%2D72%2F100"

Use Case: Team-Multiplication-Operator wants to canary a new build in Pre-Production

Team-Multiplication-Operator believes that the update to their algorithm is ready for production. Instead of using multiple Kubernetes Deployments with the same label selector (link), they figure the routing layer is the best place to decide which instances receive traffic. After all, there could still be bugs in the new code that make it unfit for production use.

Using the template engine from before, we can easily create a new Ingress resource based on our new build.

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "multiplication-operator-build-123"
  namespace: "team-multiplication-operator"
  annotations:
    kubernetes.io/ingress.class: "linkerd"
  labels:
    app: "multiplication-operator-build-123"
    environment: "preprod"
spec:
  rules:
  - host: "multiplication-operator-build-123"
    http:
      paths:
      - backend:
          serviceName: "multiplication-operator-build-123"
          servicePort: "my-http"

We will also need an updated dtab configuration file to help us out with splitting the traffic.

/svc => /#/io.l5d.k8s ;
/split => /#/io.l5d.k8s ;
/svc/team-multiplication-operator/my-http/multiplication-operator =>
  1 * /split/team-multiplication-operator/my-http/multiplication-operator-build-123 &
  8 * /split/team-multiplication-operator/my-http/multiplication-operator ;

While a bit overwhelming, we are simply splitting traffic for one host across that host and a second host most callers never knew existed. To see the split, we can use the Linkerd UI to watch how a request to the “multiplication-operator” service routes to the two Ingress Objects and their corresponding Services/Pods.
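The weights in the dtab determine the expected traffic ratio: with weights 1 and 8, roughly 1/(1+8) ≈ 11% of requests should hit the canary build. A quick sketch of that arithmetic:

```python
def split_shares(weights: dict) -> dict:
    """Expected fraction of traffic for each branch of a weighted dtab split."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

shares = split_shares({
    "multiplication-operator-build-123": 1,  # canary: 1/9 of traffic
    "multiplication-operator": 8,            # stable: 8/9 of traffic
})
```

Bumping the canary's weight and re-pushing the dtab gradually shifts more traffic onto the new build.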

See: https://linkerd.io/in-depth/routing/

Re-run the ab command from above and observe the graphs. You might need to browse to the Linkerd instance on the node where the gateway instance is scheduled.

From Static to Dynamic

DOES LINKERD SUPPORT DYNAMIC CONFIG RELOADING?

No. We prefer to avoid this pattern and to offload mutable things to separate services. For example, linkerd talks to service discovery for changes in deployed instances, and to namerd for changes in routing policy.
- link

… but …

$ namerctl dtab update web - <<EOF
/srv => /io.l5d.fs ;
/srv => /io.l5d.serversets/path/to/services ;
/host => /srv ;
/http/1.1/* => /host ;
/host/users => 1 * /srv/users-v2 & 99 * /srv/users ;
EOF
# link

No matter! In the next article we will explore how to provide dtabs as a service for developers (or in a pipeline).

Wrapping Up

We will start using the other cluster in the following blog post. My goal is to use Calico or Felix to show how teams can’t call Pre-Production services from Production.

Check out the repository here:
