csi: update csi holder daemonset template
Currently the holder daemonset is never updated,
which also leaves its images outdated. We should
update the daemonset template without restarting
the csi holder pods, since a restart causes CSI
volume access problems. Setting the updateStrategy
to OnDelete (already set in the yaml files) allows
us to update the holder daemonset template without
restarting or updating its pods; the new changes
take effect when a pod is deleted or its node is
rebooted.

Signed-off-by: Madhu Rajanna <[email protected]>
(cherry picked from commit 1ba3aa4)
Madhu-1 committed Jul 13, 2023
1 parent b57f0c7 commit 848e1b3
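
For reference, here is a minimal sketch (not part of this commit) of what an OnDelete update strategy looks like on a DaemonSet built with the standard k8s.io/api/apps/v1 types; the package name, helper name, labels, and parameters are illustrative:

package csiexample

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// newHolderDaemonSet is a hypothetical helper showing the key setting: with an
// OnDelete update strategy, pushing a new template to the DaemonSet does not
// restart its pods; a pod only picks up the new template after it is deleted
// (or its node is rebooted).
func newHolderDaemonSet(name, namespace, image string) *appsv1.DaemonSet {
	labels := map[string]string{"app": name}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
				// Never roll pods automatically on a template change.
				Type: appsv1.OnDeleteDaemonSetStrategyType,
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "csi-plugin-holder", Image: image}},
				},
			},
		},
	}
}
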
Showing 1 changed file with 5 additions and 1 deletion.
pkg/operator/ceph/csi/spec.go
@@ -867,7 +867,11 @@ func (r *ReconcileCSI) configureHolder(driver driverDetails, c ClusterDetail, tp
 	_, err = r.context.Clientset.AppsV1().DaemonSets(r.opConfig.OperatorNamespace).Create(r.opManagerContext, cephPluginHolder, metav1.CreateOptions{})
 	if err != nil {
 		if kerrors.IsAlreadyExists(err) {
-			logger.Debugf("holder %q already exists for cluster %q, it should never be updated", cephPluginHolder.Name, c.cluster.Namespace)
+			_, err = r.context.Clientset.AppsV1().DaemonSets(r.opConfig.OperatorNamespace).Update(r.opManagerContext, cephPluginHolder, metav1.UpdateOptions{})
+			if err != nil {
+				return errors.Wrapf(err, "failed to update ceph plugin holder daemonset %q", cephPluginHolder.Name)
+			}
+			logger.Debugf("holder %q already exists for cluster %q, updating it, restart holder pods to take effect of update", cephPluginHolder.Name, c.cluster.Namespace)
 		} else {
 			return errors.Wrapf(err, "failed to start ceph plugin holder daemonset %q", cephPluginHolder.Name)
 		}
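
Stripped of the surrounding reconciler, the create-then-update-on-AlreadyExists pattern from the hunk above looks roughly like the following standalone sketch (client-go based; the package and helper names are illustrative):

package csiexample

import (
	"context"

	"github.com/pkg/errors"
	appsv1 "k8s.io/api/apps/v1"
	kerrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createOrUpdateHolder tries to create the holder DaemonSet and, if it already
// exists, updates it in place. Because the DaemonSet uses the OnDelete update
// strategy, the stored template changes immediately while the running holder
// pods keep serving existing CSI volumes until they are deleted or their node
// is rebooted.
func createOrUpdateHolder(ctx context.Context, clientset kubernetes.Interface, ds *appsv1.DaemonSet) error {
	_, err := clientset.AppsV1().DaemonSets(ds.Namespace).Create(ctx, ds, metav1.CreateOptions{})
	if err == nil {
		return nil
	}
	if !kerrors.IsAlreadyExists(err) {
		return errors.Wrapf(err, "failed to start ceph plugin holder daemonset %q", ds.Name)
	}
	if _, err := clientset.AppsV1().DaemonSets(ds.Namespace).Update(ctx, ds, metav1.UpdateOptions{}); err != nil {
		return errors.Wrapf(err, "failed to update ceph plugin holder daemonset %q", ds.Name)
	}
	return nil
}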
