DES: Single cluster, single repository #368
Unanswered
j-zimnowoda
asked this question in
Q&A
Replies: 2 comments 2 replies
-
So we are thinking about: "Single repo for single cluster without layered approach".
2 replies
-
I believe we need to work out a plan, so we can perform this transition smoothly without disrupting other activities.
Step 1. Goal: enable the one cluster, one repo approach.
Possible tasks:
otomi-console/otomi-api
0 replies
-
I decided to share my thoughts about the Otomi Containers Platform in the hope that you can benefit from my insight.
This message is not meant to criticise anyone; rather, these are lessons learned that I want to share.
At first glance, I saw a true benefit in having a DRY configuration for all Kubernetes clusters.
After more than a year of working with it, I see that this approach may not be very pragmatic.
Here, I categorised some of my observations:
Operations and people’s fear
I was thinking about operators and trying to understand what they think, say and do.
In the IT world, engineers are scared of touching production environments. This is why they like to have dev and staging ones.
If I were the operator of a cluster in a big company, I would never change common cluster settings.
Most probably, I would verify the configuration on the test cluster and then copy it to the production one (if I could do it one-to-one).
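With one repo per cluster, that one-to-one promotion could be as simple as copying a reviewed file between repositories and committing it. A minimal sketch in Python, assuming hypothetical repo paths and file names (none of these are actual Otomi layout):

```python
# Hypothetical promotion flow: with one repo per cluster, promoting a reviewed
# setting from staging to production is a plain file copy plus a commit.
# Paths and file names below are illustrative, not actual Otomi layout.
import pathlib
import shutil
import tempfile

def promote(staging_repo: pathlib.Path, prod_repo: pathlib.Path, rel: str) -> pathlib.Path:
    """Copy one reviewed settings file one-to-one from staging to production."""
    src = staging_repo / rel
    dst = prod_repo / rel
    dst.parent.mkdir(parents=True, exist_ok=True)
    shutil.copyfile(src, dst)
    return dst

# Demo on throwaway directories standing in for two cluster repos.
root = pathlib.Path(tempfile.mkdtemp())
staging, prod = root / "staging-repo", root / "prod-repo"
(staging / "env").mkdir(parents=True)
(staging / "env" / "settings.yaml").write_text("ingressClass: nginx\n")
copied = promote(staging, prod, "env/settings.yaml")
```

The operator reviews the file on the test cluster, then ships exactly the same bytes to production; no shared layer can change both clusters at once.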
Transparency and visibility
Having one Git repo that describes many clusters has the drawback of unnecessary noise in the form of git commits.
I wouldn’t like to see commits from the dev cluster if I operate the production one.
In otomi-console I can be on cluster “A” while configuring secrets and services for cluster “B”,
yet at the same time, under Apps, I can only access apps from cluster “A”. It is a very confusing hybrid solution.
Usability
We have a wonderful mechanism to auto-prompt values properties, but the layered approach makes it less user friendly, as I need to remember what is in each layer.
It is really hard to get started with values, as these are complex beasts, and the layered approach does not make it easier.
I am afraid people will not use the CE version, because the entry barrier is too high.
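The layer juggling can be illustrated with a tiny deep-merge sketch. The layer names and merge order below (defaults, then common, then cluster) are my assumptions for illustration, not Otomi's actual layering, but they show why a user must remember every layer to predict an effective value:

```python
# Illustrative layered-values merge: the effective value of any key depends on
# every layer, so the user has to keep the whole stack in their head.
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base, returning a new dict."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# Hypothetical layers (names invented for this sketch).
defaults = {"ingress": {"class": "nginx", "tls": True}}
common   = {"ingress": {"class": "traefik"}}
cluster  = {"ingress": {"tls": False}}

effective = deep_merge(deep_merge(defaults, common), cluster)
# effective["ingress"] == {"class": "traefik", "tls": False}
```

No single layer shows the final answer: the class comes from the common layer, the TLS flag from the cluster layer. With one repo per cluster, the cluster file alone would be the answer.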
Velocity of Otomi development
We have developed a great auto form generation mechanism. Everything is OpenAPI spec driven!
But we couldn’t cover all platform settings, due to the complexity of implementing forms with the layered approach.
We also struggled with exposing otomi-console from different clusters, as we cannot guarantee synchronization.
We came up with the solution that only one console, from a designated cluster, may be available. Still, intuitively I feel that this is neither a robust nor a very user friendly solution.
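As a sketch of the spec-driven idea (not otomi-console's actual implementation), a small walker over a JSON-schema-like dict can emit flat form-field descriptors; the schema and field shape below are invented for illustration:

```python
# Minimal sketch of schema-driven form generation: walk a JSON-schema-like
# dict and emit one flat descriptor per leaf property. This illustrates the
# general technique only; it is not otomi-console's implementation.
def form_fields(schema: dict, prefix: str = "") -> list[dict]:
    """Flatten a schema's properties into form-field descriptors."""
    fields = []
    for name, prop in schema.get("properties", {}).items():
        path = f"{prefix}.{name}" if prefix else name
        if prop.get("type") == "object":
            fields += form_fields(prop, path)  # recurse into nested objects
        else:
            fields.append({
                "path": path,
                "type": prop.get("type", "string"),
                "default": prop.get("default"),
            })
    return fields

# Hypothetical spec fragment.
spec = {"properties": {
    "cluster": {"type": "object", "properties": {
        "name": {"type": "string"},
        "replicas": {"type": "integer", "default": 3},
    }},
}}
fields = form_fields(spec)
# -> descriptors for "cluster.name" and "cluster.replicas"
```

With a single unlayered values file per cluster, one such walk over the spec could drive a form for every setting; layering forces the generator to also know which layer each leaf lives in.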
Security
Having all configuration shared may lead to security issues.
I do not believe it is right to share credentials for accessing DNS, KMS, or object storage between the clusters.
So if I am so “smart”, then how would I do it?
First of all, I do not consider myself any kind of “guru”, rather a committed observer and a proactive person.
From the very beginning, I saw big potential in Otomi, and I am a big fan of this product. Maybe because I used to be both a developer and an operator, so I experienced the pain of both groups ;)
I imagine Otomi as a self-contained platform that is defined in a single Git repository for a SINGLE cluster.
I believe that within a single cluster we can still follow the DRY approach without compromising the KISS principle.
Single cluster configuration described in a single repository means:
Of course, Otomi still remains a multi-cluster solution, as you can install it on each big cloud provider.
Please share your insights.