
Releases: kinvolk/lokomotive

v0.2.0

19 Jun 13:04

We're happy to announce Lokomotive v0.2.0 (Bernina Express).

This release includes a ton of new features, changes and bugfixes.
Here are some highlights:

  • Kubernetes v1.18.3.
  • Many component updates.
  • AKS platform support.
  • Cloudflare DNS support.
  • Monitoring dashboard fixes.
  • Dynamic provisioning of Persistent Volumes on AWS.
  • Security improvements.

Check the full list of changes for more details.

Upgrading from v0.1.0

Prerequisites

All platforms

  • The Calico component has a new CRD that needs to be applied manually.

    kubectl apply -f https://raw.githubusercontent.com/kinvolk/lokomotive/v0.2.0/assets/lokomotive-kubernetes/bootkube/resources/charts/calico/crds/kubecontrollersconfigurations.yaml
    
  • Some component objects changed their apiVersion, so they need to be labeled and annotated manually before they can be upgraded (a quick way to verify the result is sketched after the per-component commands below).

    • Dex

      kubectl -n dex label ingress dex app.kubernetes.io/managed-by=Helm
      kubectl -n dex annotate ingress dex meta.helm.sh/release-name=dex
      kubectl -n dex annotate ingress dex meta.helm.sh/release-namespace=dex
      
    • Gangway

      kubectl -n gangway label ingress gangway app.kubernetes.io/managed-by=Helm
      kubectl -n gangway annotate ingress gangway meta.helm.sh/release-name=gangway
      kubectl -n gangway annotate ingress gangway meta.helm.sh/release-namespace=gangway
      
    • Metrics Server

      kubectl -n kube-system label rolebinding metrics-server-auth-reader app.kubernetes.io/managed-by=Helm
      kubectl -n kube-system annotate rolebinding metrics-server-auth-reader meta.helm.sh/release-namespace=kube-system
      kubectl -n kube-system annotate rolebinding metrics-server-auth-reader meta.helm.sh/release-name=metrics-server
      
    • httpbin

      kubectl -n httpbin label ingress httpbin app.kubernetes.io/managed-by=Helm
      kubectl -n httpbin annotate ingress httpbin meta.helm.sh/release-namespace=httpbin
      kubectl -n httpbin annotate ingress httpbin meta.helm.sh/release-name=httpbin
      

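To confirm the metadata was applied, you can optionally inspect one of the objects; for example, for Dex (an optional sanity check, not a required upgrade step):

kubectl -n dex get ingress dex --show-labels
kubectl -n dex get ingress dex -o jsonpath='{.metadata.annotations}'
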
AWS

You need to remove an asset we've updated from your assets directory:

rm $ASSETS_DIRECTORY/lokomotive-kubernetes/aws/flatcar-linux/kubernetes/workers.tf

Upgrading

lokocfg syntax changes

Before upgrading, make sure your lokocfg configuration follows the new v0.2.0 syntax.
Here we describe the changes.

DNS for the Packet platform

The DNS configuration syntax for the Packet platform has been simplified.

Here's an example for the Route 53 provider.

Old:

dns {
    zone = "<DNS_ZONE>"
    provider {
        route53 {
            zone_id = "<ZONE_ID>"
        }
    }
}

New:

dns {
    zone     = "<DNS_ZONE>"
    provider = "route53"
}
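
The same simplified pattern should apply to the other supported providers. As a sketch (assuming the provider value for the new Cloudflare support is simply "cloudflare"; see the reference below for the exact accepted values):

dns {
    zone     = "<DNS_ZONE>"
    provider = "cloudflare"
}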

Check out the new syntax in the Packet configuration reference for details.

External DNS component

The owner_id field is now required.
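
As a minimal sketch (the placeholder value and omitted fields are illustrative only; consult the External DNS configuration reference for the full set of options), an upgraded configuration would now include something like:

component "external-dns" {
    # owner_id identifies the owner of the DNS records managed from this cluster.
    owner_id = "<OWNER_ID>"
    # ...rest of your existing external-dns configuration stays the same.
}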

Prometheus Operator component

There is now a dedicated block for Grafana settings.

Here's an example of the changed syntax.

Old:

component "prometheus-operator" {
    namespace              = "<NAMESPACE>"
    grafana_admin_password = "<GRAFANA_PASSWORD>"
    etcd_endpoints         = ["<ETCD_IP>"]
}

New:

component "prometheus-operator" {
    namespace = "<NAMESPACE>"
    grafana {
        admin_password = "<GRAFANA_PASSWORD>"
    }
    # etcd_endpoints is not needed anymore
}

Check out the new syntax in the Prometheus Operator configuration reference for details.

Upgrade

Go to your cluster's directory and run the following command.

lokoctl cluster apply

The update process typically takes about 10 minutes.
After the update, running lokoctl health should produce output similar to the following.

Node                     Ready    Reason          Message

lokomotive-controller-0  True     KubeletReady    kubelet is posting ready status
lokomotive-1-worker-0    True     KubeletReady    kubelet is posting ready status
lokomotive-1-worker-1    True     KubeletReady    kubelet is posting ready status
lokomotive-1-worker-2    True     KubeletReady    kubelet is posting ready status
Name      Status    Message              Error

etcd-0    True      {"health":"true"}

If you have the cert-manager component installed, the first update will fail with an error and you will need to run a second one.
Run the following to upgrade your components again.

lokoctl component apply

Changes in v0.2.0

Kubernetes updates

  • Update Kubernetes to v1.18.3 (#459).

Component updates

  • openebs: update to 1.10.0 (#528).
  • dex: update to v2.24.0 (#525).
  • contour: update to v1.5.0 (#524).
  • cert-manager: update to v0.15.1 (#522).
  • calico: update to v3.14.1 (#415).
  • metrics-server: update to 0.3.6 (#343).
  • external-dns: update to 2.21.2 (#340).
  • rook: update to v1.3.1 (#300).
  • etcd: update to v3.4.9 (#521).

New platforms

  • Add AKS platform support (#219).

Bugfixes

  • Handle OS interrupts in lokoctl to fix leaking terraform process (#483).
  • Fix the self-hosted kubelet on the bare metal platform (#436); it was previously non-functional.
  • grafana: remove cluster label in kubelet dashboard (#474). This fixes missing information in the Kubelet Grafana dashboard.
  • Rook Ceph: Fix dashboard templating (#476). Some graphs were not showing information.
  • pod-checkpointer: update the pod-checkpointer image (#498). Fixes communication between the pod checkpointer and the kubelet.
  • Fix AWS worker pool handling (#367). Removes an invisible worker pool of size 0 and fixes the NLB listener wiring so ingress works.
  • Fix rendering of ingress_hosts in Contour component (#417). Fixes having a wildcard subdomain as ingress for Contour.
  • kube-apiserver: fix TLS handshake errors on Packet (#297). Removes a harmless error message.
  • calico-host-protection: fix node name of HostEndpoint objects (#201). Fixes GlobalNetworkPolicies for nodes.

Features

  • aws: add the AWS EBS CSI driver (#423). This allows dynamic provisioning of Persistent Volumes on AWS.
  • grafana: provide root_url in the grafana.ini configuration (#547), so Grafana advertises its actual URL instead of localhost.
  • packet: add Cloudflare DNS support (#422).
  • Monitor etcd by default (#493). It wasn't being monitored before.
  • Add the grafana_ingress_host variable (#468). Allows exposing Grafana through Ingress.
  • Add the ability to provide oidc configuration (#182). Allows configuring the API server to use OIDC for authentication; previously this was a manual operation.
  • Parameterise ClusterIssuer for Dex, Gangway, HTTPBin (#482). Allows using a different cluster issuer.
  • grafana: enable piechart plugin for the Prometheus Operator chart (#469). Pie chart graphs weren't showing.
  • Add a knob to disable the self-hosted kubelet (#425).
  • rook-ceph: add StorageClass config (#402). This allows setting up rook-ceph as the default storage class.
  • Add monitoring config and variable to rook component (#405). This allows monitoring rook.
  • packet: add support for hardware reservations (#299).
  • Add support for lokoctl component delete (#268).
  • bootkube: add calico-kube-controllers (#283).
  • metallb: add AlertManager rules (#140).
  • Label service-monitors so that they are discovered by Prometheus (#200). This ensures all components are monitored.
  • external-dns: expose owner_id (#207).

v0.1.0

18 Mar 18:48

Initial release.