k3s.node.service failure on fresh install #290
-
After a fresh Ubuntu Server 22.04.2 LTS install on a Minisforum NUC, I wanted to install K8s with the k3s-ansible scripts. I created a my-cluster dir with the required changes in hosts.ini and all.yaml for one host as a test; if successful, it will later be expanded with more NUCs etc. The script runs fine until:

TASK [k3s/node : Enable and check K3s service] ***************************************************************************************

Expected Behavior
Everything is installed error-free and running happily, with a first very small K8s cluster on one node for starters.

Current Behavior
TASK [k3s/node : Enable and check K3s service] ***************************************************************************************
systemctl status k3s-node.service
k3s-node.service - Lightweight Kubernetes
journalctl -xeu k3s-node.service

Steps to Reproduce
See above.

Context (variables)
Operating system: Ubuntu Server 22.04.2 LTS
Hardware: Minisforum NUC (Ryzen 5)

Variables Used
k3s_version: "v1.24.12+k3s1"
ansible_user: NA
systemd_dir: "/etc/systemd/system"
flannel_iface: "bond0"
apiserver_endpoint: "192.168.1.202"
k3s_token: "NA"
extra_server_args: "--flannel-iface={{ flannel_iface }}
--node-ip={{ k3s_node_ip }}"
extra_agent_args: "{{ extra_args }}
{{ '--node-taint node-role.kubernetes.io/master=true:NoSchedule' if k3s_master_taint else '' }}
--tls-san {{ apiserver_endpoint }}
--disable servicelb
--disable traefik
--write-kubeconfig-mode 644"
kube_vip_tag_version: "v0.5.12"
# metallb type frr or native
metal_lb_type: "native"
# metallb mode layer2 or bgp
metal_lb_mode: "layer2"
# bgp options
# metal_lb_bgp_my_asn: "64513"
# metal_lb_bgp_peer_asn: "64512"
# metal_lb_bgp_peer_address: "192.168.30.1"
# image tag for metal lb
metal_lb_frr_tag_version: "v7.5.1"
metal_lb_speaker_tag_version: "v0.13.9"
metal_lb_controller_tag_version: "v0.13.9"
# metallb ip range for load balancer
metal_lb_ip_range: "192.168.1.220-192.168.1.240"

Hosts
[master]
node001.calmus.one
# node002.calmus.one
# node003.calmus.one
[node]
node001.calmus.one
# node002.calmus.one
# node003.calmus.one
# only required if proxmox_lxc_configure: true
# must contain all proxmox instances that have a master or worker node
# [proxmox]
# 192.168.30.43
[k3s_cluster:children]
master
node

Possible Solution
Replies: 2 comments 4 replies
-
When running the k3s script from k3s.io on the CLI on the server itself, everything runs and installs fine.
-
Hmm, this is a gray area, I think. We have support for a single node and support for HA, but this is 1 master and 1 worker; I'm not sure we support this. Also, are you sure you have the right interface name for both? And did you reset your cluster before running this, or is it the same single-node cluster?
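On the interface-name question: a quick way to check is to list the interfaces the kernel reports and confirm that the value used for flannel_iface (bond0 in the vars above) is actually among them. This assumes iproute2 is installed, which it is on stock Ubuntu Server:

```shell
# List all network interface names on the host (second field of `ip -o link`).
# If "bond0" is not in this list, flannel cannot bind to it and the
# k3s-node service will fail to start; a single-NIC NUC often exposes a
# name like enp3s0 or eno1 instead.
ip -o link show | awk -F': ' '{print $2}'
```

If the same host was bootstrapped before, running the repo's reset playbook (reset.yml, if present in your checkout) before re-running the site playbook also rules out leftover state from a previous install.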
Nope, you should just list it as master and then remove the taint in the args.
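A minimal sketch of what that suggestion could look like, using the hostname from the post above; the [node] group is left empty so the single host is a master only:

```ini
# Hypothetical single-node hosts.ini per the reply above:
# the host appears only under [master]; [node] stays empty.
[master]
node001.calmus.one

[node]

[k3s_cluster:children]
master
node
```

For the taint, setting k3s_master_taint: false in all.yaml (the variable already referenced in the args above) should drop the --node-taint node-role.kubernetes.io/master=true:NoSchedule flag, so regular workloads can schedule on the master.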