
external: enable the v2 port by default in downstream #748

Merged
1 commit merged into red-hat-storage:master on Oct 10, 2024

Conversation

@parth-gr (Member) commented Oct 8, 2024

For 4.18 we will use the default v2 settings from the cluster CR, so ask the python script to fetch only the v2 port.
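For context, picking only the msgr2 endpoint comes down to filtering the monmap's addrvec by address type. The sketch below is illustrative only and is not the downstream script's actual code; the function name and the way the quorum status is read are assumptions.

```python
import json
import subprocess

def get_v2_mon_endpoint(quorum_status_json):
    """Return the v2 (msgr2, port 3300) address of the quorum leader.

    Illustrative sketch only; the real downstream script's function and
    argument names differ. It walks the monmap's addrvec and keeps the
    entry whose type is "v2" instead of the legacy "v1" (port 6789).
    """
    status = json.loads(quorum_status_json)
    leader = status["quorum_leader_name"]
    for mon in status["monmap"]["mons"]:
        if mon["name"] != leader:
            continue
        for addr in mon["public_addrs"]["addrvec"]:
            if addr["type"] == "v2":
                return f'{mon["name"]}={addr["addr"]}'
    raise RuntimeError("no v2 address found for the quorum leader")

# Example: feed it the output of `ceph quorum_status --format json`
out = subprocess.check_output(["ceph", "quorum_status", "--format", "json"])
print(get_v2_mon_endpoint(out))  # e.g. a=52.118.43.167:3300
```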

Checklist:

  • Commit Message Formatting: Commit titles and messages follow guidelines in the developer guide.
  • Reviewed the developer guide on Submitting a Pull Request
  • Pending release notes updated with breaking and/or notable changes for the next minor release.
  • Documentation has been updated, if necessary.
  • Unit tests have been added, if necessary.
  • Integration tests have been added, if necessary.

openshift-ci bot commented Oct 8, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: parth-gr

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@parth-gr (Member, Author) commented Oct 8, 2024

Testing:

Even if the v2 flag is not passed, the script returns only the v2 mon endpoint (port 3300):

sh-5.1$ python3 a.py --rbd-data-pool-name ocs-storagecluster-cephblockpool
[{"name": "external-cluster-user-command", "kind": "ConfigMap", "data": {"args": "\"[Configurations]\nrgw-pool-prefix = default\nformat = json\ncephfs-filesystem-name = ocs-storagecluster-cephfilesystem\ncephfs-metadata-pool-name = ocs-storagecluster-cephfilesystem-metadata\ncephfs-data-pool-name = ocs-storagecluster-cephfilesystem-data0\nrbd-data-pool-name = ocs-storagecluster-cephblockpool\n\""}}, {"name": "rook-ceph-mon-endpoints", "kind": "ConfigMap", "data": {"data": "a=52.118.43.167:3300", "maxMonId": "0", "mapping": "{}"}}, {"name": "rook-ceph-mon", "kind": "Secret", "data": {"admin-secret": "admin-secret", "fsid": "daa862fa-94c5-40ef-be20-149ea9a01a16", "mon-secret": "mon-secret"}}, {"name": "rook-ceph-operator-creds", "kind": "Secret", "data": {"userID": "client.healthchecker", "userKey": "AQCIJQVn57ZjEhAAcvKs1IHv0vBh6jQOvFTJMg=="}}, {"name": "monitoring-endpoint", "kind": "CephCluster", "data": {"MonitoringEndpoint": "52.118.43.167", "MonitoringPort": "9283"}}, {"name": "rook-csi-rbd-node", "kind": "Secret", "data": {"userID": "csi-rbd-node", "userKey": "AQC/lP5mqk/zEhAA1QkynTIrmqktQEiTNEwchQ=="}}, {"name": "rook-csi-rbd-provisioner", "kind": "Secret", "data": {"userID": "csi-rbd-provisioner", "userKey": "AQC/lP5m0YLdBxAAlsCcwxsuxBN6+c3GWKxGCw=="}}, {"name": "rook-csi-cephfs-provisioner", "kind": "Secret", "data": {"adminID": "csi-cephfs-provisioner", "adminKey": "AQC/lP5m17SqHRAALzlIWtCwbvTb1QQXWV/ipg=="}}, {"name": "rook-csi-cephfs-node", "kind": "Secret", "data": {"adminID": "csi-cephfs-node", "adminKey": "AQC/lP5m8Dt2KBAAHzdeaGmePWkWzBozHuFY1A=="}}, {"name": "ceph-rbd", "kind": "StorageClass", "data": {"pool": "ocs-storagecluster-cephblockpool", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-rbd-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-rbd-node"}}, {"name": "cephfs", "kind": "StorageClass", "data": {"fsName": "ocs-storagecluster-cephfilesystem", "pool": "ocs-storagecluster-cephfilesystem-data0", "csi.storage.k8s.io/provisioner-secret-name": "rook-csi-cephfs-provisioner", "csi.storage.k8s.io/controller-expand-secret-name": "rook-csi-cephfs-provisioner", "csi.storage.k8s.io/node-stage-secret-name": "rook-csi-cephfs-node"}}]

sh-5.1$ ceph quorum_status --format json 

{"election_epoch":124,"quorum":[0,1,2],"quorum_names":["a","c","d"],"quorum_leader_name":"a","quorum_age":13114,"features":{"quorum_con":"4540138322906710015","quorum_mon":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"]},"monmap":{"epoch":4,"fsid":"daa862fa-94c5-40ef-be20-149ea9a01a16","modified":"2024-10-08T08:11:02.429753Z","created":"2024-10-03T12:56:58.977731Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"1","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"a","public_addrs":{"addrvec":[{"type":"v2","addr":"52.118.43.167:3300","nonce":0},{"type":"v1","addr":"52.118.43.167:6789","nonce":0}]},"addr":"52.118.43.167:6789/0","public_addr":"52.118.43.167:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"c","public_addrs":{"addrvec":[{"type":"v2","addr":"52.118.43.166:3300","nonce":0},{"type":"v1","addr":"52.118.43.166:6789","nonce":0}]},"addr":"52.118.43.166:6789/0","public_addr":"52.118.43.166:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"d","public_addrs":{"addrvec":[{"type":"v2","addr":"52.118.43.168:3300","nonce":0},{"type":"v1","addr":"52.118.43.168:6789","nonce":0}]},"addr":"52.118.43.168:6789/0","public_addr":"52.118.43.168:6789/0","priority":0,"weight":0,"crush_location":"{}"}]}}
sh-5.1$ 
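As a quick sanity check on the quorum_status output above, a small helper (not part of this PR, just a verification aid) can confirm that every mon in the monmap advertises a v2 address on port 3300:

```python
import json
import subprocess

# Verification helper (not part of this PR): confirm every mon in the
# monmap advertises a msgr2 (v2) address on port 3300.
status = json.loads(subprocess.check_output(
    ["ceph", "quorum_status", "--format", "json"]))

for mon in status["monmap"]["mons"]:
    v2 = [a["addr"] for a in mon["public_addrs"]["addrvec"] if a["type"] == "v2"]
    assert v2 and v2[0].endswith(":3300"), f'mon {mon["name"]} has no v2 endpoint'
    print(mon["name"], v2[0])  # e.g. a 52.118.43.167:3300
```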

@parth-gr (Member, Author) commented Oct 9, 2024

/assign @travisn

Commit: for 4.18 we will use the default v2 settings from the cluster CR
So ask python script to fetch only v2 port

Signed-off-by: parth-gr <[email protected]>
@travisn commented Oct 9, 2024

LGTM after the CI passes

@travisn merged commit 35ee4e1 into red-hat-storage:master on Oct 10, 2024
50 of 54 checks passed