In a recent project I was tasked with implementing local testing of a kubernetes setup the client was working on. I ended up using kind for this and it worked out nicely. Another tool that I have been meaning to try is kubenix. In this post I will give a short overview of a few topics:
- Nixifying a small nodejs service
- Creating a docker image the nix way
- Using kind to easily boot up a k8s cluster
- Describing k8s deployments with kubenix
Note that what I am presenting is for motivational purposes and you should certainly put more thought into your setup if you want to take this approach to production.
In order to deploy something to kubernetes we first need some service. The service itself is mostly irrelevant for our purposes, so we just write a little express based JavaScript app that returns "Hello World" on a port that can be configured via the environment variable `APP_PORT`:
#!/usr/bin/env node
const express = require('express');
const app = express();
const port = process.env.APP_PORT ? process.env.APP_PORT : 3000;
app.get('/', (req, res) => res.send('Hello World'));
app.listen(port, () => console.log(`Listening on port ${port}`));
Granted, we could just deploy some random public docker image, but hey, where would be the fun in that :)
In order to nixify our little hello-app we are going to use yarn2nix, which makes everything really easy for us:
pkgs.yarn2nix.mkYarnPackage {
  name = "hello-app";
  src = ./.;
  packageJson = ./package.json;
  yarnLock = ./yarn.lock;
}
We just have to make sure that we add `"bin": "index.js"` to our `package.json`, and `mkYarnPackage` will put `index.js` in the `bin` path of our output. Since we added `#!/usr/bin/env node` to `index.js`, node will also be added to the closure of our app derivation.
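For reference, a minimal `package.json` for this could look as follows (a sketch; the express version is just an illustration):

{
  "name": "hello-app",
  "version": "1.0.0",
  "bin": "index.js",
  "dependencies": {
    "express": "^4.16.4"
  }
}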
Next we want to create a docker image of our app using `dockerTools.buildLayeredImage`:
pkgs.dockerTools.buildLayeredImage {
  name = "hello-app";
  tag = "latest";
  config.Cmd = [ "${helloApp}/bin/hello-app" ];
}
`${helloApp}` is of course the derivation we created above using `mkYarnPackage`. Easy as pie.
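To make this concrete, the two derivations could be tied together in a single expression, roughly like this (a sketch; the `app` and `image` attribute names are my own and nixpkgs pinning is omitted):

{ pkgs ? import <nixpkgs> {} }:

rec {
  # The nodejs service, packaged via yarn2nix
  app = pkgs.yarn2nix.mkYarnPackage {
    name = "hello-app";
    src = ./.;
    packageJson = ./package.json;
    yarnLock = ./yarn.lock;
  };

  # A layered docker image wrapping the service
  image = pkgs.dockerTools.buildLayeredImage {
    name = "hello-app";
    tag = "latest";
    config.Cmd = [ "${app}/bin/hello-app" ];
  };
}

Building the image via `nix-build -A image` leaves a docker archive in `./result` which `docker load` can ingest.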
kind is a portable (Linux, macOS and Windows) solution for running kubernetes clusters locally, in a docker container. The project is still young but it is getting a lot of support and works very well already:
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.13.3) 🖼
✓ [control-plane] Creating node container 📦
✓ [control-plane] Fixing mounts 🗻
✓ [control-plane] Configuring proxy 🐋
✓ [control-plane] Starting systemd 🖥
✓ [control-plane] Waiting for docker to be ready 🐋
✓ [control-plane] Pre-loading images 🐋
✓ [control-plane] Creating the kubeadm config file ⛵
✓ [control-plane] Starting Kubernetes (this may take a minute) ☸
Cluster creation complete. You can now use the cluster with:
export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
kubectl cluster-info
All it takes is `kind create cluster` and setting the correct `KUBECONFIG` environment variable, and we can interact with the cluster via `kubectl`.
kubenix parses a kubernetes configuration written in Nix and validates it against the official swagger specification of the designated kubernetes version. Apart from getting compile-time validation for free, writing kubernetes configurations in Nix allows for much better abstraction and less redundancy, which otherwise creeps in all too easily.
For the most part the `configuration.nix` is analogous to what would otherwise be written in YAML or JSON. Yet `configuration.nix` actually defines a function and introduces a small let binding:
{ type ? "dev" }:
let
  kubeVersion = "1.11";
  helloApp = rec {
    label = "hello";
    port = 3000;
    cpu = if type == "dev" then "100m" else "1000m";
    imagePolicy = if type == "dev" then "Never" else "IfNotPresent";
    env = [{ name = "APP_PORT"; value = "${toString port}"; }];
  };
in
{
  kubernetes.version = kubeVersion;
  # ...
}
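The elided part is where these values get used. The actual resource definitions are not shown in this post, but a deployment making use of the let binding could look roughly like this (a sketch; the exact option paths depend on the kubenix revision you pin, so treat them as an assumption):

{
  kubernetes.resources.deployments.${helloApp.label} = {
    metadata.labels.app = helloApp.label;
    spec = {
      replicas = 1;
      selector.matchLabels.app = helloApp.label;
      template = {
        metadata.labels.app = helloApp.label;
        spec.containers.${helloApp.label} = {
          # The locally built image; never pulled in dev (see below)
          image = "hello-app:latest";
          imagePullPolicy = helloApp.imagePolicy;
          env = helloApp.env;
          resources.requests.cpu = helloApp.cpu;
          ports = [{ containerPort = helloApp.port; }];
        };
      };
    };
  };
}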
The function takes a `type` argument which is used for adjusting the requested resources of the deployment. Obviously this is just a motivating example. It would also be possible to split bigger configurations into `production.nix` and `development.nix` which both import settings from a `generic.nix`. The best solution is the one that works best for your setup and requirements. The very fact that there are now different options to pick from is an advantage over being restricted to a bunch of YAML files. A JSON output which can be fed into `kubectl` can be created using `kubenix.buildResources`:
buildConfig = t: kubenix.buildResources {
  configuration = import ./configuration.nix { type = t; };
};
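For illustration, both flavours could then be exposed as attributes (the attribute names are my own):

{
  config-dev  = buildConfig "dev";
  config-prod = buildConfig "prod";
}

`nix-build`-ing either attribute yields a JSON file that can be passed straight to `kubectl apply -f`.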
kubenix gives us a validated k8s configuration (try to add some nonsense and you will see that it will actually yell at you) and with kind we can pull up a k8s cluster without any effort. Time to actually apply the configuration. `deploy-to-kind` does just that.
One thing worth mentioning about this: the dockerized `hello` service is a docker archive, a local .tar.gz archive. When kubernetes is asked to apply a `hello-app:latest` image it will try to fetch it from somewhere. To prevent that from happening we have to do two things:
- Tell kubernetes to never pull the image: configuration.nix
- Make the image available using `kind load image-archive`: nix/deploy-to-kind.nix (sketched below)
With that in place the deployment will work just fine.
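The actual script lives in nix/deploy-to-kind.nix; as a rough sketch of the idea (`appImage` and `buildConfig` are placeholders for the derivations from above), it could be built with `pkgs.writeShellScriptBin`:

pkgs.writeShellScriptBin "deploy-to-kind" ''
  set -e
  kind create cluster
  export KUBECONFIG="$(kind get kubeconfig-path --name="kind")"
  # Side-load the locally built docker archive into the kind node,
  # so kubernetes never has to pull hello-app:latest
  kind load image-archive ${appImage}
  # Apply the generated, validated configuration
  kubectl apply -f ${buildConfig "dev"}
''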
The `default.nix` of the project exposes the following attributes:
- `app`: The nodejs service. It can be built via `nix-build -A app`.
- `deploy-to-kind`: A script that starts a kind cluster and deploys `configuration.nix`.
- `test-deployment`: A script that implements a very simplistic smoke test to check if our app is up and working.
- `deploy-and-test`: Running this shell via `nix-shell -A deploy-and-test default.nix` will deploy, wait for the deployment and finally test it.
- `shell`: Started via `nix-shell`, this shell provides all required inputs for manually deploying and testing.
Notes:
- The version of `kind` used in this project is built from the master revision at the time of writing. The latest release doesn't include the `kind load` functionality.
- kubenix currently doesn't have any documentation but a major overhaul with great features is in the works. Follow kubenix refactoring for details.
- I used wait-for-deployment - a nice little bash script - to wait for the completion of the deployment.