Description
I followed the tutorial in the docs for the most basic AWS deployment with httpbin, but it never completes successfully.
First it shows me this:
Error: error executing "/tmp/terraform_1740352793.sh": Process exited with status 1
FATA[1472] Applying cluster failed: applying platform: failed checking execution status: exit status 1 args="[]" command="lokoctl cluster apply"
If I run it again, it shows me this:
Do you want to proceed with cluster apply? [type "yes" to continue]: yes
FATA[0044] Applying cluster failed: ensuring controlplane component "bootstrap-secrets": checking for chart history: Kubernetes cluster unreachable: Get "https://qa.rocsys.cloud:6443/version?timeout=32s": dial tcp 3.10.229.4:6443: connect: connection refused args="[]" command="lokoctl cluster apply"
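In case it is useful for triage, here is a minimal sketch of how the "connection refused" symptom can be checked outside of lokoctl, using only the endpoint name and port from the error above (it assumes dig, nc and curl are available locally):

# Resolve the API endpoint named in the error message.
dig +short qa.rocsys.cloud

# Check whether anything is listening on the Kubernetes API port;
# "connection refused" here matches the lokoctl error above.
nc -vz qa.rocsys.cloud 6443

# If the port is open, request the version the same way lokoctl does.
curl -k https://qa.rocsys.cloud:6443/version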
Impact
There is no rush, but I wanted to take what I have on bare metal for test/development to a QA environment, and ideally I want to keep using Flatcar and Lokomotive.
Environment and steps to reproduce
Set-up: All latest versions as of Friday, when I started.
Task: lokoctl apply -v
Action(s): Command above
Error: See the errors in the description above. However, I checked some logs and they tell me something:
module.aws-qa.null_resource.bootkube-start (remote-exec): Job for bootkube.service failed because the control process exited with error code.
module.aws-qa.null_resource.bootkube-start (remote-exec): See "systemctl status bootkube.service" and "journalctl -xe" for details.
module.aws-qa.null_resource.bootkube-start (remote-exec): -- Journal begins at Sun 2021-10-03 09:44:30 UTC, ends at Sun 2021-10-03 10:07:57 UTC. --
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:47 ip-10-0-3-226.eu-west-2.compute.internal systemd[1]: Starting Bootstrap a Kubernetes cluster...
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:47 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: Unable to find image 'quay.io/kinvolk/bootkube:v0.14.0-helm4' locally
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:48 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: v0.14.0-helm4: Pulling from kinvolk/bootkube
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:48 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: 9f8e6f34f591: Pulling fs layer
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:53 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: 9f8e6f34f591: Verifying Checksum
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:53 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: 9f8e6f34f591: Download complete
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:56 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: 9f8e6f34f591: Pull complete
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:56 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: Digest: sha256:d34f91c7f9dc9c4a3e72bdd39e346d69206dcce88c350a28800d0de0bfb01d79
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:56 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: Status: Downloaded newer image for quay.io/kinvolk/bootkube:v0.14.0-helm4
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:57 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: Starting temporary bootstrap control plane...
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:47:57 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: Waiting for api-server...
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:02 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:02.464006 1 util.go:22] Unable to determine api-server readiness: API Server http status: 0
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:07 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:07.458396 1 util.go:22] Unable to determine api-server readiness: API Server http status: 0
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:12 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:12.458074 1 util.go:22] Unable to determine api-server readiness: API Server http status: 0
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:17 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:17.460637 1 util.go:22] Unable to determine api-server readiness: API Server http status: 0
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:22 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:22.459039 1 util.go:22] Unable to determine api-server readiness: API Server http status: 0
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:27 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:27.458100 1 util.go:22] Unable to determine api-server readiness: API Server http status: 0
module.aws-qa.null_resource.bootkube-start (remote-exec): Oct 03 09:48:32 ip-10-0-3-226.eu-west-2.compute.internal bootkube-start[4583]: W1003 09:48:32.458002 1
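Following the hint in the log itself, this is the kind of on-node check that could be run next; a rough sketch, assuming SSH access as the default core user on the Flatcar controller (the public IP below is a placeholder), and assuming the bootstrap containers are run by Docker, as the pull output above suggests:

ssh core@<controller-public-ip>

# Follow the suggestion printed by the failed unit.
systemctl status bootkube.service
journalctl -u bootkube.service --no-pager | tail -n 100

# Check whether the temporary bootstrap control plane containers ever started
# and whether the api-server answers locally.
docker ps -a
curl -k https://localhost:6443/version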
Expected behavior
Following the tutorial step by step should end with a working basic install.
Additional information
The one thing I can think of (I am relatively new to most of this) is that there are no entries created in the DNS zone config for the API server... Your installer creates one for etcd though, so maybe it is this?
Another very likely scenario is that I have an issue in my Route53 configuration, as I was not familiar with it before I started your tutorial :p
Also, what is really strange to me is that if you wait a while and ping the DNS name created (qa.rocsys.cloud), it resolves to an 18.x.x.x address, which AWS says is the public IP of the EC2 instance for my controller node. However, as soon as I run lokoctl apply, this changes to a 3.x.x.x address, and it stays that way while pinging unless you wait long enough...
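In case it helps, here is a sketch of how the Route53 records could be compared against the controller's actual public IP; it assumes the AWS CLI is configured, and the hosted zone ID and instance Name tag below are placeholders/guesses rather than values from my setup:

# List the records created in the hosted zone (zone ID is a placeholder).
aws route53 list-resource-record-sets --hosted-zone-id <HOSTED_ZONE_ID> --query "ResourceRecordSets[?contains(Name, 'qa')]"

# Public IP of the controller instance, for comparison (the Name tag filter is a guess).
aws ec2 describe-instances --filters "Name=tag:Name,Values=qa-controller-0" --query "Reservations[].Instances[].PublicIpAddress"

# Watch how the DNS answer flips between the 18.x.x.x and 3.x.x.x addresses.
watch -n 10 dig +short qa.rocsys.cloud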