Using Kong with Kubernetes

Jeff Gensler
Dec 9, 2017

Kong is an open source API Gateway built on top of NGINX. It uses plugins to enable features at the gateway layer that would normally have to be built into each application. In this guide, I’ll show some of the basic commands and configuration, and detail some of the concerns of operating a Kong deployment. Pretty much all of the following commands are copied from the documentation; I have aggregated the few that I thought were useful for developing and publishing a web service.

Getting Kong Deployed

Deployment: via documentation

First, install Kubernetes using Minikube.

$ minikube start --kubernetes-version v1.7.0

Next, download the Kong distribution repo (Kong/kong-dist-kubernetes on GitHub) and follow its installation commands. You’ll need to comment out some of the volume configuration in the respective datastore manifest. I opted for Cassandra, though it makes no difference in this guide.
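
A rough sketch of that install path, assuming the Cassandra variant and the manifest names the distribution repo used at the time of writing (they may have moved since):

$ git clone https://github.com/Kong/kong-dist-kubernetes.git
$ cd kong-dist-kubernetes
# comment out the volume configuration in cassandra.yaml first
$ kubectl create -f cassandra.yaml
$ kubectl create -f kong_cassandra.yaml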

After Kong is deployed, you can navigate to the exposed service and explore Kong’s API layer:

$ kubectl get svc kong-admin
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-admin 10.0.0.157 <pending> 8001:31779/TCP 1h
$ curl http://192.168.99.100:31779/apis
{"total":0,"data":[]}

You can use Kong’s command line tool only after you set up a config file. I haven’t had any success with the CLI, so I’ll use the HTTP Admin API for the remainder of the article.

Deployment: via kong-operator

To automate the install of Kong, you can use the kong-operator to quickly get a cluster up and running. I won’t dive into the operator in this article, but it may be useful in your setup.

Creating your first API

First, we will need an application to route to. I’ll use a simple nginx container.

$ kubectl run nginx --image nginx --port 80
$ kubectl expose deployment nginx --port 8080 --target-port 80
$ kubectl get svc nginx
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx 10.0.0.54 <none> 8080/TCP 15s

Then, we can create the API.

$ curl -i -X POST --url http://192.168.99.100:31779/apis/ \
--data 'name=nginx-hello' \
--data 'hosts=nginx.testing' \
--data 'upstream_url=http://10.0.0.54:8080'
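
To confirm the API was stored, you can read it back from the Admin API (output abbreviated here; the exact fields vary by Kong version):

$ curl http://192.168.99.100:31779/apis/nginx-hello
{"name":"nginx-hello","hosts":["nginx.testing"],"upstream_url":"http://10.0.0.54:8080", ...}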

Next, route to the API you’ve just created.

$ kubectl get svc kong-proxy
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kong-proxy 10.0.0.85 <pending> 8000:30825/TCP 1h
$ curl -i -X GET \
--url http://192.168.99.100:30825/ \
--header 'Host: nginx.testing'
HTTP/1.1 200 OK
Content-Type: text/html; charset=UTF-8
Content-Length: 612
Connection: keep-alive
Server: nginx/1.13.7
Date: Sat, 09 Dec 2017 18:42:39 GMT
Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
ETag: "5a1437f4-264"
Accept-Ranges: bytes
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 731
Via: kong/0.11.2
...
<h1>Welcome to nginx!</h1>
...

Creating APIs dynamically using Ingress

We can use another open source project to automate the creation of APIs: the kong-ingress controller (koli/kong-ingress on GitHub).

First, let’s delete the existing API from Kong:

$ curl -i -X DELETE --url http://192.168.99.100:31779/apis/nginx-hello

Next, let’s deploy the Kong Ingress controller:

$ kubectl create -f https://raw.githubusercontent.com/koli/kong-ingress/master/docs/examples/kong-ingress.yaml

Now, we are ready to create the Ingress resource:

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: nginx.testing
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 8080
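
Save the manifest and create it (the file name here is just an example):

$ kubectl create -f test-ingress.yaml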

This should create a corresponding API in Kong.
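
You can confirm by listing APIs via the Admin API. The name the controller generated in my cluster was nginx.testing~default~300030 (it appears to combine the host, the namespace, and a numeric suffix), so yours may differ:

$ curl http://192.168.99.100:31779/apis
{"total":1,"data":[{"name":"nginx.testing~default~300030", ...}]}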

Note that the NGINX upstream (upstream_url) will be set to a DNS name. This is tricky because NGINX caches DNS results and may not re-resolve them when pods are shuffled around. It may be worth investigating depending on your use case; an NGINX ingress controller or articles on NGINX DNS caching cover this in more detail.

Afterward, we can run the same curl command as before. Interestingly enough, the first request takes quite a bit of time while NGINX resolves the address for the first time (see X-Kong-Proxy-Latency):

$ curl -vvv -X GET \
--url http://192.168.99.100:30825/ \
--header 'Host: nginx.testing'
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 30825 (#0)
> GET / HTTP/1.1
> Host: nginx.testing
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=UTF-8
< Content-Length: 612
< Connection: keep-alive
< Server: nginx/1.13.7
< Date: Sat, 09 Dec 2017 20:13:31 GMT
< Last-Modified: Tue, 21 Nov 2017 14:28:04 GMT
< ETag: "5a1437f4-264"
< Accept-Ranges: bytes
< X-Kong-Upstream-Latency: 3
< X-Kong-Proxy-Latency: 1063
< Via: kong/0.11.2
<
...
<h1>Welcome to nginx!</h1>
...

Halfway there…

At this point, we have seen that Kong and kong-ingress can help us easily create APIs. However, not much is different yet from a standard Ingress controller. Let’s explore some of Kong’s features.

Rate Limiting

We might want to expose an API that can’t handle many requests. To solve this, we will need to add the Rate Limiting plugin. Following the documentation:

$ curl -X POST http://192.168.99.100:31779/apis/nginx.testing~default~300030/plugins \
--data "name=rate-limiting" \
--data "config.minute=2"

Because we have set the rate limiter to only accept two requests a minute, we will exceed the limit on the third request:

$ curl -vvv -X GET \
--url http://192.168.99.100:30825/ \
--header 'Host: nginx.testing'
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 30825 (#0)
> GET / HTTP/1.1
> Host: nginx.testing
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 429
< Date: Sat, 09 Dec 2017 20:39:29 GMT
< Content-Type: application/json; charset=utf-8
< Transfer-Encoding: chunked
< Connection: keep-alive
< X-RateLimit-Limit-minute: 2
< X-RateLimit-Remaining-minute: 0
< Server: kong/0.11.2
<
{"message":"API rate limit exceeded"}

Because we aren’t using authentication, this setup could protect an exposed API that an external service calls (possibly a webhook).

Authentication

Let’s assume we are hosting our organization’s “Developer API” and only want to let in requests that carry an API key. To solve this, we will need to add the Key Authentication plugin. Following the documentation:

$ curl -i -X POST http://192.168.99.100:31779/apis/nginx.testing~default~300030/plugins \
--data 'name=key-auth'

You can try the usual curl request from above to verify that you now get an Unauthorized error.
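
Something like the following, though the exact status line and message depend on your Kong version:

$ curl -i http://192.168.99.100:30825/ --header 'Host: nginx.testing'
HTTP/1.1 401 Unauthorized
...
{"message":"No API key found in headers or querystring"}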

Now, we have to create a “Consumer.” I’ll use one named “serviceaccount”:

$ curl -X POST \
--url http://192.168.99.100:31779/consumers/ \
--data "username=serviceaccount"
{"created_at":1512852490202,"username":"serviceaccount","id":"80b4d3ef-16d2-449b-b2b6-deb66ba68be4"}

Afterward, we can generate an API key for the “serviceaccount” user.

$ curl -X POST \
192.168.99.100:31779/consumers/serviceaccount/key-auth/ \
--data ''
{"id":"7b1bcd56-360d-4c11-9324-a3e1903be2cd","created_at":1512852626780,"key":"N2mT3MvfdFcB9XGB7n7njupRv9z7c6wn","consumer_id":"f351b860-55d8-446f-893b-43c11e745a6d"}

Finally, we can use the API key in our service request:

$ curl -X GET \
--url http://192.168.99.100:30825/ \
--header 'Host: nginx.testing' \
--header 'apikey: N2mT3MvfdFcB9XGB7n7njupRv9z7c6wn'
...
<h1>Welcome to nginx!</h1>
...

If you are continuing from the section above, you can send the third request and get the rate limiting message.

Closing Thoughts

Circling back to the Rate Limiting plugin, we probably want to rate limit on a per-developer basis. To do that, we would add the extra config.limit_by parameter when enabling the plugin on the API.
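
A sketch of what that could look like, reusing the generated API name from earlier; config.limit_by=consumer ties the counter to the authenticated Consumer rather than the client IP:

$ curl -X POST http://192.168.99.100:31779/apis/nginx.testing~default~300030/plugins \
  --data "name=rate-limiting" \
  --data "config.limit_by=consumer" \
  --data "config.minute=2"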

Also, we can see that we would have to run the plugin enablement commands every time an Ingress object is created. I think this is where either the kong-operator’s ThirdPartyResource or the kong-ingress controller’s annotations would be useful. I don’t think there is an annotation for enabling plugins, so that would likely be a great contribution to that project.

In terms of operational issues, you’ll be using Consumers to authenticate users. This means that an existing API with an existing set of API tokens will need to be migrated using the /consumers API. Assuming you have identity stored in an external provider, you’ll also need to keep those two data sources in sync.
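
A minimal sketch of migrating a single existing token, assuming a hypothetical user “alice” whose token you want to preserve (key-auth lets you supply the key instead of generating one):

$ curl -X POST http://192.168.99.100:31779/consumers/ \
  --data "username=alice"
$ curl -X POST http://192.168.99.100:31779/consumers/alice/key-auth/ \
  --data "key=alices-existing-token"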

Thoughts on building a Kubernetes-Native API-Enabled Monolith

To package a full-fledged application, you would have to bundle:

  • a Kong TPR to create the plugin-configured Kong Gateway
  • a persistent datastore to hold existing API keys
  • the application itself
  • optionally, an AuthenticationProvider2KongConsumer application to bootstrap missing API keys

If you were using Helm, you could use its template language to configure static service accounts (for a testing environment) or persistent backends (like a static AWS RDS instance instead of a Postgres Pod) depending on the environment.
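
For example, with a hypothetical chart, the per-environment switch could be as simple as different values files:

$ helm install ./kong-app -f values-testing.yaml      # in-cluster Postgres Pod, seeded test keys
$ helm install ./kong-app -f values-production.yaml   # external RDS endpoint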
