
Cannot modify ingress for arbitrary TCP services per microk8s docs #3025

Open
jdevries3133 opened this issue Apr 3, 2022 · 5 comments
jdevries3133 commented Apr 3, 2022

The documentation for microk8s's ingress add-on describes how TCP services can be exposed on arbitrary ports. It shows an example of a Redis service being exposed on port 6379 by making a simple change to the Nginx ingress daemonset.
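
For reference, the documented approach (paraphrased from memory here, not copied verbatim from the docs) is to map the external port to a namespace/service:port entry in the tcp-services configmap and expose the same port on the controller daemonset, roughly:

# paraphrased sketch of the docs' Redis example; the exact entry in the docs may differ
kubectl -n ingress edit configmap nginx-ingress-tcp-microk8s-conf
# then, under data:
#   "6379": default/redis:6379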

I am trying to deploy a mail server inside my cluster via the mailu helm chart. The deployment seems to be up and running fine; I just need to figure out how to expose it.

Minimal Example

I tried to make the ingress daemonset listen on port 25 (for starters) with the following change, applied via kubectl edit -n ingress daemonset.apps/nginx-ingress-microk8s-controller:

         - containerPort: 10254
           hostPort: 10254
           protocol: TCP
+        - containerPort: 25
+          hostPort: 25
+          protocol: TCP
         readinessProbe:
           failureThreshold: 3
           httpGet:

After applying this change, the ingress pods restart successfully on two of my three nodes. On one of the three nodes, though, I get the following failure:

Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  5m6s   default-scheduler  0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 2 node(s) didn't match Pod's node affinity/selector.
  Warning  FailedScheduling  3m51s  default-scheduler  0/3 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 2 node(s) didn't match Pod's node affinity/selector.
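
In case it helps with triage, here is a sketch of the checks I plan to run next (the node name is a placeholder; substitute whichever node fails to schedule):

# see which pods on the failing node already claim host ports
kubectl get pods -A -o wide --field-selector spec.nodeName=<failing-node>
# list the host ports each ingress pod is requesting
kubectl get pods -n ingress -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[0].ports[*].hostPort}{"\n"}{end}'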

Full Example

Later, I added the rest of the mail ports to the daemonset and the tcp-services configmap, so that they look like this:

# configmap/nginx-ingress-tcp-microk8s-conf

apiVersion: v1
data:
  "25": mailu/mailu-front:25
  "110": mailu/mailu-front:110
  "143": mailu/mailu-front:143
  "465": mailu/mailu-front:465
  "587": mailu/mailu-front:587
  "993": mailu/mailu-front:993
  "995": mailu/mailu-front:995
kind: ConfigMap
metadata: ...  # omitted for brevity
+++ additions to daemonset.apps/nginx-ingress-microk8s-controller

          - containerPort: 10254
           hostPort: 10254
           protocol: TCP
+        - containerPort: 25
+          hostPort: 25
+          name: tcp-25
+          protocol: TCP
+        - containerPort: 110
+          hostPort: 110
+          name: tcp-110
+          protocol: TCP
+        - containerPort: 143
+          hostPort: 143
+          name: tcp-143
+          protocol: TCP
+        - containerPort: 465
+          hostPort: 465
+          name: tcp-465
+          protocol: TCP
+        - containerPort: 587
+          hostPort: 587
+          name: tcp-587
+          protocol: TCP
+        - containerPort: 993
+          hostPort: 993
+          name: tcp-993
+          protocol: TCP
+        - containerPort: 995
+          hostPort: 995
+          name: tcp-995
+          protocol: TCP
         readinessProbe:
           failureThreshold: 3
           httpGet:
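
For completeness, the same port additions can also be applied non-interactively; a minimal sketch (assuming the ingress controller is the first container in the pod template), shown here for port 25 only:

kubectl patch daemonset nginx-ingress-microk8s-controller -n ingress --type=json \
  -p '[{"op": "add", "path": "/spec/template/spec/containers/0/ports/-", "value": {"containerPort": 25, "hostPort": 25, "name": "tcp-25", "protocol": "TCP"}}]'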

The failure does seem to happen consistently on the same node. It's so weird to me that it's only going wrong on one of the three nodes! Of course, I've checked and double-checked the output of sudo lsof -i -P -n | grep LISTEN on all the hosts to confirm that port 25 is free on every one of them.

The Plot (my confusion) Thickens

Then again, I just took a look at one of the nodes that was "working." The nginx pod on this host looks happy; it's apparently bound to all the ports I've asked for:

Name:         nginx-ingress-microk8s-controller-fgr27
Namespace:    ingress
Priority:     0
Node:         big-boi/192.168.1.239
Start Time:   Sat, 02 Apr 2022 21:03:37 -0400
Labels:       controller-revision-hash=5c58fcc4bd
              name=nginx-ingress-microk8s
              pod-template-generation=19
Annotations:  cni.projectcalico.org/podIP: 10.1.139.46/32
              cni.projectcalico.org/podIPs: 10.1.139.46/32
              kubectl.kubernetes.io/restartedAt: 2022-04-02T21:03:24-04:00
Status:       Running
IP:           10.1.139.46
IPs:
  IP:           10.1.139.46
Controlled By:  DaemonSet/nginx-ingress-microk8s-controller
Containers:
  nginx-ingress-microk8s:
    Container ID:  containerd://fd9d0e63d79e013f702d556c3ec5bd6111d231042fd10a185d3206331feb08d1
    Image:         k8s.gcr.io/ingress-nginx/controller:v1.1.0
    Image ID:      k8s.gcr.io/ingress-nginx/controller@sha256:f766669fdcf3dc26347ed273a55e754b427eb4411ee075a53f30718b4499076a
    Ports:         80/TCP, 443/TCP, 10254/TCP, 25/TCP, 110/TCP, 143/TCP, 465/TCP, 587/TCP, 993/TCP, 995/TCP
    Host Ports:    80/TCP, 443/TCP, 10254/TCP, 25/TCP, 110/TCP, 143/TCP, 465/TCP, 587/TCP, 993/TCP, 995/TCP
    Args:
      /nginx-ingress-controller
      --configmap=$(POD_NAMESPACE)/nginx-load-balancer-microk8s-conf
      --tcp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-tcp-microk8s-conf
      --udp-services-configmap=$(POD_NAMESPACE)/nginx-ingress-udp-microk8s-conf
      --ingress-class=public

      --publish-status-address=127.0.0.1
    State:          Running
      Started:      Sat, 02 Apr 2022 21:03:39 -0400
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:10254/healthz delay=10s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-microk8s-controller-fgr27 (v1:metadata.name)
      POD_NAMESPACE:  ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nztfv (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  kube-api-access-nztfv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                             node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists
                             node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                             node.kubernetes.io/unreachable:NoExecute op=Exists
                             node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:                      <none>

BUT, if I ssh into that node and run my trusty sudo lsof -i -P -n | grep LISTEN, I don't see any processes bound to those ports! What?!? At this point, you can put me down as thoroughly confused; my confusion only grows as I continue to draft this issue.

systemd-r    926 systemd-resolve   13u  IPv4   18178      0t0  TCP 127.0.0.53:53 (LISTEN)
sshd         978            root    3u  IPv4   18187      0t0  TCP *:22 (LISTEN)
sshd         978            root    4u  IPv6   18189      0t0  TCP *:22 (LISTEN)
python3     4408            root    5u  IPv4   42829      0t0  TCP *:25000 (LISTEN)
python3     4458            root    5u  IPv4   42829      0t0  TCP *:25000 (LISTEN)
k8s-dqlit   4515            root   15u  IPv4   48147      0t0  TCP 192.168.1.239:19001 (LISTEN)
container   5015            root   12u  IPv4   40892      0t0  TCP 127.0.0.1:1338 (LISTEN)
container   5015            root   16u  IPv4   45848      0t0  TCP 127.0.0.1:34399 (LISTEN)
kubelite    5357            root    8u  IPv6   52229      0t0  TCP *:16443 (LISTEN)
kubelite    5357            root  111u  IPv4   53427      0t0  TCP 127.0.0.1:10256 (LISTEN)
kubelite    5357            root  121u  IPv6   50532      0t0  TCP *:10255 (LISTEN)
kubelite    5357            root  122u  IPv6   52308      0t0  TCP *:10250 (LISTEN)
kubelite    5357            root  123u  IPv4   52310      0t0  TCP 127.0.0.1:10248 (LISTEN)
kubelite    5357            root  134u  IPv6   47860      0t0  TCP *:10259 (LISTEN)
kubelite    5357            root  135u  IPv6   48641      0t0  TCP *:10257 (LISTEN)
kubelite    5357            root  138u  IPv4   53429      0t0  TCP 127.0.0.1:10249 (LISTEN)
node_expo   6016          nobody    3u  IPv4   48743      0t0  TCP 127.0.0.1:9100 (LISTEN)
kube-rbac   6072           65532    3u  IPv4  104500      0t0  TCP 192.168.1.239:9100 (LISTEN)
calico-no   6385            root    7u  IPv4   54604      0t0  TCP 127.0.0.1:9099 (LISTEN)
livenessp   8595            root    7u  IPv6   68658      0t0  TCP *:9808 (LISTEN)
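
One guess, which I haven't verified: hostPort may not show up as a listening socket at all. If the CNI portmap plugin is handling it, the mapping would appear as NAT rules rather than a process in lsof. A quick way I could check (sketch only):

# look for hostPort DNAT rules installed by the CNI portmap plugin, if any
sudo iptables-save -t nat | grep -i -e hostport -e 'dport 25'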

For extra context, the mail service looks happy and healthy, though I haven't tested it thoroughly. Here is a bird's-eye view of what that looks like. This also shows the mailu/mailu-front service that I'm targeting in the configmap from before:

➜  mailu git:(feat/email) ✗ k get all -n mailu
NAME                                  READY   STATUS    RESTARTS   AGE
pod/mailu-dovecot-65b587778-d4hjf     1/1     Running   0          6h
pod/mailu-roundcube-5d54df68b-nvr67   1/1     Running   0          6h
pod/mailu-rspamd-69d799fff5-8mb2z     1/1     Running   0          6h
pod/mailu-redis-8556964f48-tbz6s      1/1     Running   0          6h
pod/mailu-postfix-dd9f9ccf4-82dvz     1/1     Running   0          6h
pod/mailu-clamav-559d455d78-5glhh     1/1     Running   0          6h
pod/mailu-admin-75d978b499-874sj      1/1     Running   0          6h
pod/mailu-front-685d75b776-jml4w      1/1     Running   0          5h5m

NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                             AGE
service/mailu-clamav      ClusterIP   10.152.183.8     <none>        3310/TCP                                                                            6h52m
service/mailu-postfix     ClusterIP   10.152.183.12    <none>        25/TCP,465/TCP,587/TCP,10025/TCP                                                    6h52m
service/mailu-admin       ClusterIP   10.152.183.210   <none>        80/TCP                                                                              6h52m
service/mailu-dovecot     ClusterIP   10.152.183.11    <none>        2102/TCP,2525/TCP,143/TCP,110/TCP,4190/TCP                                          6h52m
service/mailu-rspamd      ClusterIP   10.152.183.124   <none>        11332/TCP,11334/TCP                                                                 6h52m
service/mailu-redis       ClusterIP   10.152.183.111   <none>        6379/TCP                                                                            6h52m
service/mailu-roundcube   ClusterIP   10.152.183.244   <none>        80/TCP                                                                              6h52m
service/mailu-front       ClusterIP   10.152.183.110   <none>        110/TCP,995/TCP,143/TCP,993/TCP,25/TCP,465/TCP,587/TCP,10025/TCP,10143/TCP,80/TCP   6h52m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/mailu-clamav      1/1     1            1           6h52m
deployment.apps/mailu-front       1/1     1            1           6h52m
deployment.apps/mailu-postfix     1/1     1            1           6h52m
deployment.apps/mailu-redis       1/1     1            1           6h52m
deployment.apps/mailu-rspamd      1/1     1            1           6h52m
deployment.apps/mailu-admin       1/1     1            1           6h52m
deployment.apps/mailu-dovecot     1/1     1            1           6h52m
deployment.apps/mailu-roundcube   1/1     1            1           6h52m

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/mailu-front-65f76bc8f6      0         0         0       6h39m
replicaset.apps/mailu-front-685d75b776      1         1         1       6h29m
replicaset.apps/mailu-clamav-559d455d78     1         1         1       6h52m
replicaset.apps/mailu-front-5ff698bfcb      0         0         0       6h52m
replicaset.apps/mailu-postfix-dd9f9ccf4     1         1         1       6h52m
replicaset.apps/mailu-redis-8556964f48      1         1         1       6h52m
replicaset.apps/mailu-rspamd-69d799fff5     1         1         1       6h52m
replicaset.apps/mailu-admin-75d978b499      1         1         1       6h52m
replicaset.apps/mailu-dovecot-65b587778     1         1         1       6h52m
replicaset.apps/mailu-admin-cfd89c668       0         0         0       74m
replicaset.apps/mailu-dovecot-6dc5979f4     0         0         0       74m
replicaset.apps/mailu-roundcube-5d54df68b   1         1         1       6h52m
replicaset.apps/mailu-roundcube-d7db6db74   0         0         0       73m

Context

I am running microk8s version 1.23. I have a three-node cluster.

output of microk8s inspect


stale bot commented Feb 27, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the inactive label Feb 27, 2023
@jdevries3133
Author

Ping to keep the stale bot away – still awaiting a response.

stale bot removed the inactive label Mar 1, 2023

stale bot commented Jan 25, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the inactive label Jan 25, 2024
@neoaggelos
Contributor

neoaggelos commented Jan 30, 2024

Hi @jdevries3133, not sure if this is still relevant; sorry for missing this issue (twice) in the past. I got here via the stale bot as well.

This is probably an issue that the nginx-ingress folks would be more knowledgeable about; it does not seem that MicroK8s is doing anything that would get in the way.

stale bot removed the inactive label Jan 30, 2024

stale bot commented Dec 25, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added inactive and removed inactive labels Dec 25, 2024