-
I have cloned your repo and only changed the addresses to match my infrastructure. When I run it, the playbook completes with no errors; however, on checking I notice that Traefik is still installed, even though I have the default of --disable traefik. When --disable traefik is set, Traefik should be omitted from the installation.

Steps to Reproduce
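Run the playbook against a fresh host with the default variables, then check for Traefik pods. A sketch of the check (assuming kubectl is configured for the new cluster):

    # Traefik pods are still deployed after the run:
    kubectl -n kube-system get pods | grep -i traefik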
Context (variables)
Target OS is Ubuntu 20.04

Variables Used
Directly from your repo

Hosts
[master]
k3s0_test ansible_host=192.168.92.16 ansible_user=?????? ansible_ssh_pass=??????
[node]
[k3s_cluster:children]
master
node

Possible Solution
-
Hey, can you please fill out the full template? It seems that what you are experiencing is actually something wrong with your configuration in the all.yaml.
-
It is basically your all.yaml, with just IP addresses changed.

---
k3s_version: v1.24.12+k3s1
# this is the user that has ssh access to these machines
ansible_user: ansibleuser
systemd_dir: /etc/systemd/system
# Set your timezone
system_timezone: "Your/Timezone"
# interface which will be used for flannel
flannel_iface: "eth0"
# apiserver_endpoint is the virtual ip-address which will be configured on each master
apiserver_endpoint: "192.168.92.16"
# k3s_token is required so that masters can talk together securely
# this token should be alpha numeric only
k3s_token: "some-SUPER-DEDEUPER-secret-password"
# The IP on which the node is reachable in the cluster.
# Here, a sensible default is provided, you can still override
# it for each of your hosts, though.
k3s_node_ip: '{{ ansible_facts[flannel_iface]["ipv4"]["address"] }}'
# Disable the taint manually by setting: k3s_master_taint = false
k3s_master_taint: "{{ true if groups['node'] | default([]) | length >= 1 else false }}"
# these arguments are recommended for servers as well as agents:
extra_args: >-
--flannel-iface={{ flannel_iface }}
--node-ip={{ k3s_node_ip }}
# change these to your liking; the only required ones are: --disable servicelb, --tls-san {{ apiserver_endpoint }}
extra_server_args: >-
{{ extra_args }}
{{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
--tls-san {{ apiserver_endpoint }}
--disable servicelb
--disable traefik
extra_agent_args: >-
{{ extra_args }}
# image tag for kube-vip
kube_vip_tag_version: "v0.5.11"
# metallb type frr or native
metal_lb_type: "native"
# metallb mode layer2 or bgp
metal_lb_mode: "layer2"
# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"
# image tag for metal lb
metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
# metallb ip range for load balancer
metal_lb_ip_range: "192.168.92.20-192.168.92.24"
# Only enable if your nodes are proxmox LXC nodes, make sure to configure your proxmox nodes
# in your hosts.ini file.
# Please read https://gist.github.com/triangletodd/02f595cd4c0dc9aac5f7763ca2264185 before using this.
# Most notably, your containers must be privileged, and must not have nesting set to true.
# Please note this script disables most of the security of lxc containers, with the trade off being that lxc
# containers are significantly more resource efficient compared to full VMs.
# Mixing and matching VMs and lxc containers is not supported, ymmv if you want to do this.
# I would only really recommend using this if you have particularly low powered proxmox nodes where the overhead of
# VMs would use a significant portion of your available resources.
proxmox_lxc_configure: false
# the user that you would use to ssh into the host, for example if you run ssh some-user@my-proxmox-host,
# set this value to some-user
proxmox_lxc_ssh_user: root
# the unique proxmox ids for all of the containers in the cluster, both worker and master nodes
proxmox_lxc_ct_ids:
- 200
- 201
- 202
- 203
- 204
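For what it's worth, with this file and the single-master inventory above, k3s_master_taint evaluates to false (the node group is empty), so extra_server_args should render to roughly the following flags. This is a sketch; --node-ip is whatever address eth0 carries on each host:

    k3s server \
      --flannel-iface=eth0 \
      --node-ip=192.168.92.16 \
      --tls-san 192.168.92.16 \
      --disable servicelb \
      --disable traefik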
-
I can confirm this issue. It started 2-3 weeks ago. I use this extra config for the server args, and k3s (or Tim's script) simply doesn't pick it up. It used to work before.
Here is my workaround:
I don't understand why these flags are no longer working.
-
Sorry, the flags still work. Everything is working as expected. I just created a new cluster and it works fine. I would check the troubleshooting guide again and reset your cluster.

➜ k3s-ansible git:(master) ✗ kubectl get pods --all-namespaces
NAMESPACE        NAME                                      READY   STATUS    RESTARTS      AGE
default          nginx-6fb79bc456-c6jmz                    1/1     Running   0             12m
default          nginx-6fb79bc456-cmrr9                    1/1     Running   0             12m
default          nginx-6fb79bc456-d2kwv                    1/1     Running   0             12m
kube-system      coredns-7b5bbc6644-l4qxg                  1/1     Running   1 (16m ago)   17m
kube-system      kube-vip-ds-sq5ng                         1/1     Running   1 (16m ago)   17m
kube-system      kube-vip-ds-sqhjp                         1/1     Running   1 (16m ago)   17m
kube-system      kube-vip-ds-xc67x                         1/1     Running   1 (16m ago)   17m
kube-system      local-path-provisioner-687d6d7765-7pzxm   1/1     Running   1 (16m ago)   17m
kube-system      metrics-server-667586758d-7sbkf           1/1     Running   1 (16m ago)   17m
metallb-system   controller-c6c466d64-l4h6j                1/1     Running   0             17m
metallb-system   speaker-968vb                             1/1     Running   0             17m
metallb-system   speaker-dmlpw                             1/1     Running   0             17m
metallb-system   speaker-k5tpz                             1/1     Running   0             17m
metallb-system   speaker-tlbmj                             1/1     Running   0             15m
metallb-system   speaker-x9d2j                             1/1     Running   0             15m
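There is no traefik pod anywhere in that output. k3s deploys its bundled Traefik through a HelmChart manifest, so a quick double-check is possible (a sketch, assuming kubectl access): the grep should come back empty, and no traefik entry should appear in the HelmChart list, when --disable traefik took effect.

    kubectl get pods -A | grep -i traefik
    kubectl -n kube-system get helmcharts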
-
@timothystewart6 I am using a different version for k3s:

k3s_version: v1.24.12+k3s1
k3s_version: v1.26.3+k3s1   << my version
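To see which version actually ended up on the nodes (assuming kubectl access), the VERSION column of kubectl get nodes reports the k3s build. The output below is illustrative only; node name and roles are assumptions:

    kubectl get nodes
    # NAME        STATUS   ROLES                       AGE   VERSION
    # k3s0-test   Ready    control-plane,etcd,master   17m   v1.26.3+k3s1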
-
Confirm that your Flannel Interface configuration is actually named eth0. If you configure that incorrectly, it can cause the k3s_node_ip variable to not resolve. The error can get swallowed by the default('') filter here and result in the extra arguments not getting applied.
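A quick way to confirm the real interface name before setting flannel_iface (inventory path assumed):

    # ad-hoc Ansible: dump each host's default IPv4 interface facts
    ansible -i hosts.ini all -m setup -a "filter=ansible_default_ipv4"

    # or on the node itself, list interfaces with their IPv4 addresses
    ip -o -4 addr show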