Deployment installation Kubernetes
This guide describes how eCamp v3 can be deployed on a Kubernetes cluster using helm.
Prerequisites:
- A kubernetes cluster running kubernetes >= 1.19
- Locally: up-to-date versions of `kubectl`, `helm` 3 and Git
For our instance https://dev.ecamp3.ch and the feature branch deployments, we created a kubernetes cluster on DigitalOcean with the second-cheapest settings (two nodes instead of one). Once the cluster runs out of CPU or memory, we simply scale it up at DigitalOcean (e.g. by adding more nodes).
Wherever you are hosting your kubernetes cluster, there should be some information on how to connect to it. On DigitalOcean, we can download the "config file" and place it in `~/.kube/config`. Or if that file already exists and is in use, one can also save it in e.g. `~/.kube/ecamp3.yaml` and execute `export KUBECONFIG=~/.kube/ecamp3.yaml` once in the terminal before using `kubectl` or `helm`.
You can test whether the connection to the cluster works using `kubectl cluster-info` or `kubectl get nodes`.
Your hosting provider will probably give you access to a kubernetes dashboard. This is a graphical user interface for monitoring and managing the state of everything that is deployed on your cluster. The same information can be retrieved on the command line using the `kubectl` tool, e.g. the pods in the default namespace can be listed using `kubectl get pods`. You can also use the `helm` tool to view information on all helm-managed releases, e.g. `helm list` shows all releases (in the default namespace).
We use Cloudflare for SSL termination, but you can set up cert-manager to use a Let's Encrypt certificate.
In order to expose container ports on the internet, we need a so-called ingress controller in our cluster. The ingress controller constantly monitors the containers (and their ingress definitions) in the cluster, and acts as a reverse proxy that distributes network requests to the correct containers. For our purposes, an nginx ingress controller works fine. If you haven't set one up in the previous section, follow the documentation. We used a one-click deployment from DigitalOcean to set this up for us.
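For illustration, an ingress definition that such a controller picks up looks roughly like this. This is a generic sketch, not taken from the ecamp3 chart; the name, host and service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress            # hypothetical name
spec:
  ingressClassName: nginx          # tells the nginx ingress controller to handle this
  rules:
    - host: example.mydomain.com   # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # hypothetical service in the cluster
                port:
                  number: 80
```

The controller watches for such objects and routes requests for `example.mydomain.com` to the named service.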
Note: To upgrade the ingress-nginx installed in the cluster, either remove it and re-install it, or use the following command:

```shell
kubectl set image deployment/ingress-nginx-controller -n ingress-nginx controller=k8s.gcr.io/ingress-nginx/controller:v1.0.5@sha256:55a1fcda5b7657c372515fe402c3e39ad93aa59f6e4378e82acd99912fe6028d
```
```shell
cat << 'EOF' | tee -a nginx-values.yaml
controller:
  metrics:
    enabled: true
  podAnnotations:
    "prometheus.io/scrape": "true"
    "prometheus.io/port": "10254"
  config:
    log-format-escape-json: "true"
    log-format-upstream: '{"timestamp":"$time_iso8601","requestID":"$req_id","proxyUpstreamName":"$proxy_upstream_name","proxyAlternativeUpstreamName":"$proxy_alternative_upstream_name","upstreamStatus":$upstream_status,"upstreamAddr":"$upstream_addr","httpRequest":{"requestMethod":"$request_method","requestUrl":"$host$request_uri","status":$status,"requestSize":"$request_length","responseSize":"$upstream_response_length","userAgent":"$http_user_agent","remoteIp":"$remote_addr","referer":"$http_referer","request_time_seconds":$upstream_response_time,"protocol":"$server_protocol"}}'
EOF
helm upgrade --install ingress-nginx ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace \
  --repo https://kubernetes.github.io/ingress-nginx \
  --values nginx-values.yaml
rm nginx-values.yaml
```
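Since `log-format-escape-json` is enabled, each access log line should come out as valid JSON once nginx substitutes the `$...` variables. A quick local sanity check on a trimmed-down version of the template (a sketch with dummy values, not part of the deployment):

```shell
# Trimmed-down copy of the log template above; single quotes keep the
# nginx variables ($time_iso8601 etc.) from being expanded by the shell.
template='{"timestamp":"$time_iso8601","httpRequest":{"requestMethod":"$request_method","status":$status}}'

# Substitute dummy values for the nginx variables...
line=$(echo "$template" | sed \
  -e 's/\$time_iso8601/2024-01-01T00:00:00+00:00/' \
  -e 's/\$request_method/GET/' \
  -e 's/\$status/200/')

# ...and check that the result parses as JSON.
echo "$line" | python3 -c 'import json,sys; print(json.load(sys.stdin)["httpRequest"]["status"])'
```

Note that numeric fields like `$status` are deliberately unquoted in the template so they end up as JSON numbers, not strings.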
You can either set up the DNS records for your deployment manually, or you can install external-dns in the cluster, which has the ability to talk to many DNS providers via their APIs and automatically set up the necessary DNS records for the ingresses defined in the cluster. E.g. we use Cloudflare for DNS, so we installed external-dns as follows:
```shell
helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade --install \
  -n external-dns --create-namespace \
  --set policy=sync \
  --set provider=cloudflare \
  --set-file cloudflare.apiToken=<file containing the API token for cloudflare> \
  --set txtOwnerId=<scope for dns entries created by helm chart> \
  ecamp3-external-dns bitnami/external-dns
```
If you want to use multiple clusters on the same domain, you need a separate `txtOwnerId` per cluster. It is good practice anyway that external-dns only deletes the entries it created itself.
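For illustration, external-dns marks each record it manages with a companion TXT record carrying the owner id, which is how two clusters on the same domain stay out of each other's way. The hostname, IP and owner id below are made up, and the exact record content can vary by external-dns version:

```shell
# Example DNS records as external-dns would maintain them (values are
# placeholders; the TXT record's owner id corresponds to --set txtOwnerId).
records='api-ecamp3.mydomain.com.  300  IN  A    203.0.113.10
api-ecamp3.mydomain.com.  300  IN  TXT  "heritage=external-dns,external-dns/owner=my-cluster-id"'
echo "$records"
```

With `policy=sync`, external-dns only deletes records whose ownership TXT record matches its own `txtOwnerId`.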
We use the 'Kubernetes Monitoring Stack' one-click app provided by DigitalOcean for monitoring the resource usage of the pods on the cluster. For installation instructions see Kubernetes Monitoring Stack.
To access the Grafana dashboards, you can forward the HTTP port to your localhost:

```shell
kubectl port-forward svc/kube-prometheus-stack-grafana 9000:80 -n kube-prometheus-stack
```

Then, access Grafana at localhost:9000 (yes, really at localhost!). Login is `admin` / `prom-operator`. Note: All our application pods are in the `default` namespace; the other namespaces are for system services.
Run the following commands to generate a public/private key pair for signing JWT tokens. For optimal compatibility, these commands can also be run inside the php container: `docker-compose run --entrypoint sh -v ${PWD}/:/app php`

```shell
read -p 'Enter a passphrase that should be used for the key pair: ' jwt_passphrase
echo -n "$jwt_passphrase" > jwt-passphrase.txt
echo "$jwt_passphrase" | openssl genpkey -out private.pem -pass stdin -aes256 -algorithm rsa -pkeyopt rsa_keygen_bits:4096
echo "$jwt_passphrase" | openssl pkey -in private.pem -passin stdin -out public.pem -pubout
```
The key pair is now stored in the files `public.pem` and `private.pem`, and the passphrase in `jwt-passphrase.txt`. You will need the values in these three files for the deployment, as described below.
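To check that a freshly generated key pair actually works before deploying, you can sign and verify a dummy payload locally. This sketch regenerates a pair with a throwaway passphrase and a smaller key size for speed; the real deployment keys should be generated as shown above:

```shell
# Generate a throwaway key pair (2048 bits for speed here; use 4096 for real keys).
jwt_passphrase='example-passphrase'
echo -n "$jwt_passphrase" > jwt-passphrase.txt
echo "$jwt_passphrase" | openssl genpkey -out private.pem -pass stdin -aes256 -algorithm rsa -pkeyopt rsa_keygen_bits:2048
echo "$jwt_passphrase" | openssl pkey -in private.pem -passin stdin -out public.pem -pubout

# Sign a dummy payload with the private key, then verify it with the public key.
echo -n 'payload' > payload.txt
openssl dgst -sha256 -sign private.pem -passin file:jwt-passphrase.txt -out payload.sig payload.txt
openssl dgst -sha256 -verify public.pem -signature payload.sig payload.txt
```

If the last command reports a verification failure, the key pair or passphrase files do not belong together.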
If you don't want the helm chart to automatically create your database, you need to provide one yourself. With Postgres v15 we ran into some problems on DigitalOcean: to allow the specified user to migrate the schema, you need to grant the following permissions.
```sql
GRANT ALL PRIVILEGES ON DATABASE "$DATABASE" TO "$USER";
GRANT ALL PRIVILEGES ON SCHEMA public TO "$USER";
```
Connect to your cluster as the user `doadmin` to set the privileges. Make sure you are connected to `$DATABASE` when granting the privileges on `SCHEMA public`.
The small DigitalOcean databases we use don't have enough connections to run 4 API instances at once. We therefore had to use "Connection Pools" to manage the available connections for the many API instances (and their workers). First create a connection pool in the DigitalOcean UI; we used a pool size of 11 and the pool mode "Transaction", and we also enabled the "User privileges override" setting. Then use the connection pool's connection details instead of the database's connection details directly.
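In practice, switching to the pool mostly means swapping the host port and database name in the connection URL. The values below are placeholders for what the DigitalOcean UI shows under the pool's "Connection details" (on DigitalOcean, pools typically listen on port 25061 while the database itself listens on 25060):

```shell
# Placeholder URLs: copy the real values from the DigitalOcean UI.
db_url="postgresql://ecamp3:PASSWORD@db-cluster.b.db.ondigitalocean.com:25060/ecamp3?sslmode=require"
pool_url="postgresql://ecamp3:PASSWORD@db-cluster.b.db.ondigitalocean.com:25061/ecamp3-pool?sslmode=require"

# The application should be configured with the pool URL, not the direct one.
echo "$pool_url"
```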
ecamp3 comes with a helm chart, which can be thought of like a package that is installed using `apt` or `npm` or `composer`, but for installing software on a kubernetes cluster. The helm chart is based on the one coming with API Platform, but was extended to include all the services that ecamp3 consists of.
First, you will have to get the chart to your computer. Since at this time we don't publish the chart to any helm repository, you will have to clone the GitHub repository which includes the chart, and change into the chart directory:

```shell
git clone https://github.com/ecamp/ecamp3.git
cd ecamp3/.helm/ecamp3
```
From there, perform the following steps to deploy ecamp3 on your cluster. The same command also works for upgrading an existing instance.
💡 If you are unsure, you can add the `--dry-run` and `--debug` arguments to test your deployment first.
```shell
helm dependency update .
helm upgrade --install \
  --set imageTag=latest \
  --set sharedCookieDomain=.mydomain.com \
  --set api.domain=api-ecamp3.mydomain.com \
  --set frontend.domain=ecamp3.mydomain.com \
  --set print.domain=print-ecamp3.mydomain.com \
  --set mail.domain=mail-ecamp3.mydomain.com \
  --set postgresql.dropDBOnUninstall=false \
  --set php.dataMigrationsDir=dev-data \
  --set-file php.jwt.publicKey=public.pem \
  --set-file php.jwt.privateKey=private.pem \
  --set-file php.jwt.passphrase=jwt-passphrase.txt \
  --set deploymentTime=$(date -u +%s) \
  --set deployedVersion=$(git rev-parse --short HEAD) \
  --set recaptcha.siteKey=disabled \
  ecamp3 .
```
To re-deploy the same instance with just some adaptations to some `--set` values, you can also run e.g. `helm upgrade --reuse-values --set deployedVersion=1.2.1 ecamp3 .`. To deploy another copy of ecamp3, just change the release name (`ecamp3` on the last line).
There are lots of other configuration values you can change. For a full list, refer to values.yaml.
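Instead of passing many `--set` flags, overrides can also be collected in a values file. This is a sketch with a hypothetical file name; the keys mirror the `--set` flags used above:

```yaml
# my-values.yaml (hypothetical file name)
imageTag: latest
sharedCookieDomain: .mydomain.com
api:
  domain: api-ecamp3.mydomain.com
frontend:
  domain: ecamp3.mydomain.com
print:
  domain: print-ecamp3.mydomain.com
mail:
  domain: mail-ecamp3.mydomain.com
postgresql:
  dropDBOnUninstall: false
php:
  dataMigrationsDir: dev-data
```

It can then be passed with `helm upgrade --install --values my-values.yaml ... ecamp3 .`; the `--set-file` flags for the key material still need to be given on the command line (or their contents inlined into the values file).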
If the image tag you selected with `--set imageTag` is not available on Docker Hub, you can manually build and push the docker images:
```shell
docker build --target api_platform_php -t ecamp/ecamp3-api:$(git rev-parse HEAD) ../../api
docker push ecamp/ecamp3-api:$(git rev-parse HEAD)
docker build --target api_platform_caddy_prod -t ecamp/ecamp3-caddy:$(git rev-parse HEAD) ../../api
docker push ecamp/ecamp3-caddy:$(git rev-parse HEAD)
docker build -t ecamp/ecamp3-frontend:$(git rev-parse HEAD) --file ../../.docker-hub/frontend/Dockerfile ../..
docker push ecamp/ecamp3-frontend:$(git rev-parse HEAD)
docker build -t ecamp/ecamp3-print:$(git rev-parse HEAD) --file ../../.docker-hub/print/Dockerfile ../..
docker push ecamp/ecamp3-print:$(git rev-parse HEAD)
```
And then run the `helm upgrade` command again.
To restart a single deployment (here e.g. the frontend of the dev instance):

```shell
kubectl rollout restart deployment ecamp3-dev-frontend
```
To delete a release (here e.g. a feature branch deployment):

```shell
helm delete ecamp3-pr1234
```
See the dedicated documentation page on this topic.
We have a GitHub Action that deploys our environments for us. But if we ever need to manually deploy, this is (more or less) the command we use:
```shell
helm upgrade --install \
  --set imageTag=$(git rev-parse HEAD) \
  --set sharedCookieDomain=.ecamp3.ch \
  --set api.domain=api-custom.ecamp3.ch \
  --set frontend.domain=custom.ecamp3.ch \
  --set print.domain=print-custom.ecamp3.ch \
  --set mail.domain=mail-custom.ecamp3.ch \
  --set postgresql.enabled=false \
  --set-file postgresql.url=postgres-url \
  --set-file postgresql.adminUrl=postgres-admin-url \
  --set postgresql.dropDBOnUninstall=false \
  --set php.dataMigrationsDir=dev-data \
  --set-file php.jwt.passphrase=jwt-passphrase \
  --set-file php.jwt.publicKey=public.pem \
  --set-file php.jwt.privateKey=private.pem \
  --set-file frontend.sentryDsn=frontend-sentry-dsn \
  --set-file print.sentryDsn=print-sentry-dsn \
  --set deploymentTime=$(date -u +%s) \
  --set deployedVersion=$(git rev-parse --short HEAD) \
  --set recaptcha.siteKey=disabled \
  ecamp3-mycustomrelease .
```
(Secret values aren't included here, but instead are placed in files like jwt-passphrase etc. and read using --set-file for this example.)