Running Crunchy Data's Postgres Operator with ScaleIO #169

Open
3 of 5 tasks
vladimirvivien opened this issue Jun 16, 2017 · 3 comments

vladimirvivien commented Jun 16, 2017

Operators are becoming a popular pattern for building applications that minimize the complexity of operating software on Kubernetes. Crunchy Data has developed a Kubernetes operator stack for deploying and operating clustered Postgres on Kubernetes. This is an opportunity for {code} to investigate the operator model and the feasibility of running a Postgres database cluster with the Kubernetes ScaleIO volume plugin providing storage.

  • Investigate how ScaleIO can be integrated into such a setup
  • Determine what is needed to make ScaleIO work with the operator
  • Get a Postgres cluster running using the Kubernetes ScaleIO volume plugin
  • Contribute changes to make ScaleIO work with the operator (7/5)
  • Write up how to run a Postgres cluster with ScaleIO on Kubernetes (7/17)

Findings (so far)

  • The Postgres operator does not appear to be compatible with Kubernetes 1.8 alpha at this point (understandable)
  • In its current state, the Postgres operator will require (at least) configuration changes to its PVCs to work with ScaleIO (see the sketch after this list)
  • Already filed a bug report with the author; will continue as I find more.
  • The documented Getting Started steps will not work with ScaleIO; changes to the scripts are required
  • Working on a set of changes to get the operator to work with ScaleIO
  • Got a Postgres cluster to run on ScaleIO using Kubernetes
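For context, here is a minimal sketch of the kind of storage configuration those PVC changes point at, assuming the in-tree kubernetes.io/scaleio provisioner; the class/claim names, gateway URL, protection domain, storage pool, secret, and size below are placeholders, not values taken from the operator or Crunchy Data's docs:

```yaml
# Hypothetical ScaleIO StorageClass; all parameter values are placeholders.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: scaleio-sc
provisioner: kubernetes.io/scaleio
parameters:
  gateway: https://sio-gateway:443/api   # ScaleIO REST gateway (placeholder)
  system: scaleio                        # ScaleIO system name (placeholder)
  protectionDomain: pd01                 # placeholder
  storagePool: sp01                      # placeholder
  secretRef: sio-secret                  # secret with gateway credentials (placeholder)
  fsType: xfs
---
# A claim that selects the class explicitly via storageClassName.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pgdata-claim
spec:
  storageClassName: scaleio-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
```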

@vladimirvivien (Author)

Update on what I've found so far:

  • Research revealed the Postgres Operator works best with Kubernetes 1.6.x when following the instructions from Crunchy Data. That fact is not documented and led down many, many rabbit holes.
  • Later versions (1.7+) introduce authorization concerns that are not documented
  • The internal PVC used by the code to set up storage seems to ignore the storageClassName spec even after adding it manually. This may require a code change to work properly.
  • For now, a workaround for the issue above has been to set the ScaleIO StorageClass as the default (sketched after this list).
  • Lastly, I am investigating an issue with ScaleIO itself. It returns an error when multiple SDCs attempt to map to the same volume: sio_mgr.go:143] scaleio: attachment for volume sio-7bc74644569e11e78c0f42010 failed: problem getting response: Only a single SDC may be mapped to this volume at a time
  • However, ScaleIO happily lets me manually map the same volume to multiple SDCs with no problems.
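For reference, the default-StorageClass workaround mentioned above amounts to annotating the (assumed) scaleio-sc class as the cluster default, so claims that end up without an effective storageClassName still provision ScaleIO volumes. A sketch, using the standard is-default-class annotation (older clusters used the storageclass.beta.kubernetes.io prefix):

```yaml
# Mark the assumed ScaleIO StorageClass as the cluster default so claims
# that omit (or whose code ignores) storageClassName still land on ScaleIO.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: scaleio-sc
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/scaleio
parameters:                              # placeholder values as in the sketch above
  gateway: https://sio-gateway:443/api
  system: scaleio
  protectionDomain: pd01
  storagePool: sp01
  secretRef: sio-secret
```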

@vladimirvivien (Author)

Blockers

  • While I was able to run the operator with a single k8s node, it fails with multiple nodes
  • There's an issue with the k8s ScaleIO plugin: it does not support multiple instances mapped to the same volume
  • The fix for that would be to change the k8s ScaleIO code to allow volumes to be mapped to multiple instances
  • That way the master Postgres pod can mount the volume read-write (RW) while the standby replicas mount it read-only (RO); see the sketch after this list
  • Some code changes would also be needed in the Postgres operator so that its PVCs specify storageClassName
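To illustrate the intended end state (this does not work today because of the single-SDC-per-volume restriction), a shared data volume claim would have to advertise both read-write and read-only access, with replica pods mounting it read-only. A rough sketch under those assumptions, reusing the placeholder scaleio-sc class from above:

```yaml
# Hypothetical claim for a volume shared between the master and replicas.
# Blocked today: the k8s ScaleIO plugin maps a volume to only one SDC.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pg-shared-claim
spec:
  storageClassName: scaleio-sc
  accessModes:
    - ReadWriteOnce    # master pod would mount read-write
    - ReadOnlyMany     # standby replicas would mount read-only
  resources:
    requests:
      storage: 8Gi
```

Replica pods would additionally set readOnly: true on their volumeMounts, while the master mounts the volume normally.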

Recommendations

  • Delay any write-up/integration with ScaleIO until the blockers above are fixed
  • Once fixed, revisit the ScaleIO + Postgres Operator integration, write-up, etc.

kacole2 commented Jul 19, 2017

@vladimirvivien I'm going to postpone the 7/31 blog since you have blockers here. Let me know if there is anything I'm missing or if there's an updated target date.
