Add samples? #32
Hey, thanks for reporting and sorry to hear you didn't get this working. These charts are supposed to be good enough for other people to use, although it's only me who maintains them. Let me share with you the exact values that I use to deploy with this chart:

```yaml
env:
  # -- Set the container timezone
  TZ: Europe/London
  # -- joplin-server base URL
  APP_BASE_URL: https://joplin.myserver.com
  # -- joplin-server listening port (same as Service port)
  APP_PORT: 22300
  # -- Use pg for postgres
  DB_CLIENT: pg
  # -- Postgres DB Host
  POSTGRES_HOST: joplin-server-postgresql
  # -- Postgres DB port
  POSTGRES_PORT: # 5432
  # -- Postgres DB name
  POSTGRES_DATABASE: joplin
  # -- Postgres DB Username
  POSTGRES_USER: joplin
  # -- Postgres DB password
  POSTGRES_PASSWORD: joplin-pass

controller:
  replicas: 2
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app.kubernetes.io
                  operator: In
                  values:
                    - joplin-server
            topologyKey: kubernetes.io/hostname

resources:
  requests:
    cpu: 10m
    memory: 192Mi

ingress:
  main:
    enabled: true
    annotations:
      cert-manager.io/cluster-issuer: "letsencrypt-prod"
    ingressClassName: "public"
    hosts:
      - host: joplin.myserver.com
        paths:
          - path: /
            pathType: Prefix
    tls:
      - secretName: ingress-tls
        hosts:
          - joplin.myserver.com

# -- Enable and configure postgresql database subchart under this key.
# For more options see [postgresql chart documentation](https://github.com/bitnami/charts/tree/master/bitnami/postgresql)
postgresql:
  enabled: true
  auth:
    postgresPassword: joplin-admin-pass
    username: joplin
    password: joplin-pass
    database: joplin
  primary:
    persistence:
      enabled: true
      retain: true
      storageClass: cstor
      size: 2Gi
    resources:
      limits: {}
      requests:
        memory: 64Mi
        cpu: 10m
    priorityClassName: database
```

There's a lot of stuff in my example that isn't strictly necessary (like running 2 replicas on different nodes), but I think the key to your problem is probably in the environment variables and the Postgres credentials. Can you retry, explicitly setting those? In the default values.yaml for this chart, those env vars are empty. It's tricky, because you do need to set them manually. Let me know if you get this working with the extra options, and let me have a think about making this chart work out of the box so it's a better experience for everyone.
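Applying a values file like this is just a standard Helm install/upgrade; a minimal sketch, where the release name, chart path, and namespace are assumptions to adjust for your setup:

```sh
# Hypothetical release name, chart path, and namespace - adjust as needed.
helm upgrade --install joplin-server ./charts/joplin-server \
  --namespace joplin --create-namespace \
  --values values.yaml
```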
I pasted that in verbatim, with these changes:
Unfortunately, I still get 404 Not Found. The Postgres server seems mostly happy, and in fact it looks like it's receiving queries from something, presumably Joplin (and then complaining about them):
The Joplin servers seem happy:
But shucks if I don't still have a generic 404 Not Found error. I admit this is out of my depth - any idea how to diagnose this? The other services on this cluster, including the working Joplin server that isn't going through Helm, are using a rather bog-standard service/ingress setup, at least as far as I know. I'm happy to keep testing stuff out, but I'm a k8s novice; if debugging is worth your time, lemme know what to do :) For what it's worth, here's the doubtless-extremely-messy k8s file I'm using for my functional server (a quick port-forward sanity check is sketched after it):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: joplin-server
  labels:
    app: joplin-server
spec:
  selector:
    matchLabels:
      app: joplin-server
  template:
    metadata:
      labels:
        app: joplin-server
    spec:
      containers:
        - name: joplin-server
          image: joplin/server:2.10.8-beta
          env:
            - name: APP_BASE_URL
              value: https://joplin.example.com
            - name: APP_PORT
              value: '22300'
            - name: DB_CLIENT
              value: pg
            - name: POSTGRES_USER
              value: joplin
            - name: POSTGRES_PASSWORD
              value: nope
            - name: POSTGRES_DATABASE
              value: joplin
            - name: POSTGRES_PORT
              value: '5432'
            - name: POSTGRES_HOST
              value: joplin-server-postgres
          ports:
            - containerPort: 22300
---
apiVersion: v1
kind: Service
metadata:
  name: joplin-server
spec:
  selector:
    app: joplin-server
  ports:
    - protocol: "TCP"
      port: 22300
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: joplin-server-postgres
  labels:
    app: joplin-server-postgres
spec:
  selector:
    matchLabels:
      app: joplin-server-postgres
  template:
    metadata:
      labels:
        app: joplin-server-postgres
    spec:
      containers:
        - name: joplin-server-postgres
          image: postgres:15.1
          env:
            - name: POSTGRES_USER
              value: joplin
            - name: POSTGRES_PASSWORD
              value: nope
            - name: POSTGRES_DB
              value: joplin
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: joplin-server-mount
              mountPath: /var/lib/postgresql/data
              subPath: postgres
      volumes:
        - name: joplin-server-mount
          persistentVolumeClaim:
            claimName: joplin-server-claim
---
apiVersion: v1
kind: Service
metadata:
  name: joplin-server-postgres
spec:
  selector:
    app: joplin-server-postgres
  ports:
    - protocol: "TCP"
      port: 5432
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: joplin-server-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - joplin.example.com
      secretName: joplin-server-kubernetes-tls
  rules:
    - host: "joplin.example.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: joplin-server
                port:
                  number: 22300
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: joplin-server-claim
  labels:
    app: joplin-server
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 2Gi
```
(the volume claim is technically in a different file but I don't think this matters)
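One quick way to tell whether the 404 comes from the ingress or from Joplin itself is to bypass the ingress entirely. A minimal sketch, assuming the chart's Service is named joplin-server (check `kubectl get svc` for the real name in your release):

```sh
# Tunnel straight to the Service, skipping the ingress controller entirely.
kubectl port-forward svc/joplin-server 22300:22300

# In another shell: if this returns the Joplin pages, the app is fine
# and the 404 is coming from the ingress layer instead.
curl -i http://localhost:22300/
```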
Hey, sorry I've taken a few days to come back to you. If you're getting a 404 then one of two things is happening: either the request never matches the Ingress rule and the controller serves its default backend, or the request does reach joplin-server and the app itself returns the 404.
Judging by the generic 404 page you're describing, I'd start with the ingress side. Without seeing your actual cluster it's hard to be more helpful than this.
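For reference, the two cases can usually be told apart with a couple of kubectl queries; a sketch, with resource names assumed to match the examples above:

```sh
# Does the Ingress point at the Service you expect, on the right port?
kubectl describe ingress joplin-server

# Does that Service actually have pod endpoints behind it?
# An empty ENDPOINTS column means the selector doesn't match the pod labels.
kubectl get endpoints joplin-server
```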
My turn to apologize for the delay; I'm going through Employment Adventures (tm) and nobody enjoys those. Yep, it's on DigitalOcean. I tried your suggestions, but no luck. This is probably not worth spending a bunch of time on; as mentioned, I do have Joplin working, just via manual Kubernetes descriptors instead of Helm. Helm is cool! I like Helm! But boy howdy is it hard to debug :V I'd be happy to keep working on it because I'm learning useful stuff about Kubernetes, but I also know "debugging via some guy who doesn't know what he's doing" is not a great experience. So don't worry about it :) Maybe in a few months someone will come along with the same problem and the stuff you did here will prove useful! Your call on what to do with the issue; if you want to leave it open as a reference, go for it, but I won't be offended if you close it.
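On the "Helm is hard to debug" point: one thing that takes some of the mystery out is rendering the chart to plain manifests without installing anything, then comparing the output against a hand-written setup that works. A sketch, assuming a local chart checkout and the values file from earlier:

```sh
# Render the chart locally without touching the cluster, then eyeball
# the generated Service and Ingress against the known-good manifests.
helm template joplin-server ./charts/joplin-server --values values.yaml > rendered.yaml
```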
Is your feature request related to a problem?
I recently tried to get the Joplin Server Helm chart working. I was not successful; I managed to get a whole bunch of 404 errors and not a lot more, despite gradually maneuvering my way through tricky k8s-at-home errors and doing a whole bunch of tweaking.
I eventually gave up and scrapped the Helm chart and just did it in straight k8s.
Describe the solution you'd like.
I don't know if this is intended for end users or if it's just personal. If it's just personal, rock on, you do you :) But if it's intended for end users, a working example deployment would be nice! For Joplin Server specifically, there are a few settings that absolutely need to be changed, and it's unclear how to get the ingress working correctly.
Describe alternatives you've considered.
[none]
Additional context.
Here's the Helmfile I ended up with, which didn't work:
I have no idea how close I was.