deploying services automatically #284

Open. Wants to merge 6 commits into base: main.
10 changes: 9 additions & 1 deletion Makefile
@@ -171,8 +171,11 @@ docker-buildx: ## Build and push docker image for the manager for cross-platform

##@ Deployment

CLOUD_ID := $(shell oc get clusterversion -o jsonpath='{.items[].spec.clusterID}{"\n"}')
INGRESS_HOST := o2ims.$(shell oc get ingresscontrollers.operator.openshift.io -n openshift-ingress-operator default -o jsonpath='{.status.domain}{"\n"}')
Contributor: These two lines need to be removed.


ifndef ignore-not-found
-ignore-not-found = false
+ignore-not-found = true
Contributor: This needs to be reverted.

endif

.PHONY: install
@@ -187,7 +190,12 @@ uninstall: manifests kustomize kubectl ## Uninstall CRDs from the K8s cluster sp
deploy: manifests kustomize kubectl ## Deploy controller to the K8s cluster specified in ~/.kube/config.
@$(KUBECTL) create configmap env-config --from-literal=HWMGR_PLUGIN_NAMESPACE=$(HWMGR_PLUGIN_NAMESPACE) --dry-run=client -o yaml > config/manager/env-config.yaml
cd config/manager && $(KUSTOMIZE) edit set image controller=${IMG}
cp config/manager/inventory.yaml config/manager/inventory.back
sed -i 's/ingressHost:.*/ingressHost: $(INGRESS_HOST)/' config/manager/inventory.yaml
Contributor: The ingress host handling needs to be removed. This will be handled automatically with the patch that I provided you.

cat config/manager/inventory.yaml
$(KUSTOMIZE) build config/$(KUSTOMIZE_OVERLAY) | $(KUBECTL) apply -f -
cp config/manager/inventory.back config/manager/inventory.yaml
rm -f config/manager/inventory.back
Contributor: I'm not really sure I understand what is being accomplished with the copying of the inventory.yaml file. Could you explain why this is necessary?


.PHONY: undeploy
undeploy: kustomize kubectl ## Undeploy controller from the K8s cluster specified in ~/.kube/config. Call with ignore-not-found=true to ignore resource not found errors during deletion.
587 changes: 587 additions & 0 deletions allain.patch
Contributor: This file should be removed from your commit.

42 changes: 40 additions & 2 deletions bundle/manifests/oran-o2ims.clusterserviceversion.yaml


21 changes: 21 additions & 0 deletions config/manager/inventory.yaml
@@ -0,0 +1,21 @@
apiVersion: o2ims.oran.openshift.io/v1alpha1
kind: Inventory
metadata:
  labels:
    app.kubernetes.io/created-by: oran-o2ims
    app.kubernetes.io/instance: o2ims
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/part-of: oran-o2ims
  name: inventory
  namespace: oran-o2ims
spec:
  alarmSubscriptionServerConfig:
    enabled: false
  deploymentManagerServerConfig:
    enabled: true
    cloudId: 1122-3444-5555 # your cloudId / globalCloudId
    ingressHost: # your IngressHost
  metadataServerConfig:
    enabled: true
  resourceServerConfig:
    enabled: true
6 changes: 3 additions & 3 deletions config/manager/kustomization.yaml
@@ -4,14 +4,14 @@ kind: Kustomization
resources:
- manager.yaml
- env-config.yaml

- inventory.yaml
generatorOptions:
disableNameSuffixHash: true

images:
- name: controller
-  newName: quay.io/openshift-kni/oran-o2ims-operator
-  newTag: 4.16.0
+  newName: quay.io/mmorency0/oran-o2ims
+  newTag: latest
Contributor: This needs to be reverted.



# This replacement copies the controller image into the `IMAGE` environment variable of the pod,
8 changes: 8 additions & 0 deletions config/rbac/role.yaml
@@ -250,6 +250,14 @@ rules:
  - get
  - patch
  - update
- apiGroups:
  - operator.openshift.io
  resources:
  - ingresscontrollers
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - policy.open-cluster-management.io
  resources:
33 changes: 24 additions & 9 deletions internal/controllers/inventory_controller.go
@@ -48,6 +48,7 @@ import (
"sigs.k8s.io/controller-runtime/pkg/predicate"
)

//+kubebuilder:rbac:groups=operator.openshift.io,resources=ingresscontrollers,verbs=get;list;watch
//+kubebuilder:rbac:groups=authentication.k8s.io,resources=tokenreviews,verbs=create
//+kubebuilder:rbac:groups=authorization.k8s.io,resources=subjectaccessreviews,verbs=create
//+kubebuilder:rbac:groups=o2ims.oran.openshift.io,resources=inventories,verbs=get;list;watch;create;update;patch;delete
@@ -428,7 +429,7 @@ func (t *reconcilerTask) setupOAuthClient(ctx context.Context) (*http.Client, er
}

// registerWithSmo sends a message to the SMO to register our identifiers and URL
func (t *reconcilerTask) registerWithSmo(ctx context.Context) error {
func (t *reconcilerTask) registerWithSmo(ctx context.Context, ingressHost string) error {
// Retrieve the local cluster id value. It appears to always be identified as "version" in its metadata
clusterId, err := utils.GetClusterId(ctx, t.client, utils.ClusterVersionName)
if err != nil {
@@ -443,7 +444,7 @@ func (t *reconcilerTask) registerWithSmo(ctx context.Context) error {
data := utils.AvailableNotification{
GlobalCloudId: t.object.Spec.CloudId,
OCloudId: clusterId,
ImsEndpoint: fmt.Sprintf("https://%s/o2ims-infrastructureInventory/v1", t.object.Spec.IngressHost),
ImsEndpoint: fmt.Sprintf("https://%s/o2ims-infrastructureInventory/v1", ingressHost),
}

body, err := json.Marshal(data)
@@ -466,7 +467,7 @@ }
}

// setupSmo executes the high-level action set register with the SMO and set up the related conditions accordingly
func (t *reconcilerTask) setupSmo(ctx context.Context) (err error) {
func (t *reconcilerTask) setupSmo(ctx context.Context, ingressHost string) (err error) {
if t.object.Spec.SmoConfig == nil {
meta.SetStatusCondition(
&t.object.Status.DeploymentsStatus.Conditions,
@@ -481,7 +482,7 @@ func (t *reconcilerTask) setupSmo(ctx context.Context) (err error) {
}

if !utils.IsSmoRegistrationCompleted(t.object) {
err = t.registerWithSmo(ctx)
err = t.registerWithSmo(ctx, ingressHost)
if err != nil {
t.logger.ErrorContext(
ctx, "Failed to register with SMO.",
@@ -526,16 +527,30 @@ func (t *reconcilerTask) run(ctx context.Context) (nextReconcile ctrl.Result, er
	// Set the default reconcile time to 5 minutes.
	nextReconcile = ctrl.Result{RequeueAfter: 5 * time.Minute}

	// Determine our ingress domain
	ingressHost := t.object.Spec.IngressHost
	if ingressHost == "" {
		ingressHost, err = utils.GetIngressDomain(ctx, t.client)
		if err != nil {
			t.logger.ErrorContext(
				ctx,
				"Failed to get ingress domain.",
				slog.String("error", err.Error()))
			return
		}
		ingressHost = "o2ims." + ingressHost
	}

// Register with SMO (if necessary)
err = t.setupSmo(ctx)
err = t.setupSmo(ctx, ingressHost)
if err != nil {
return
}

// Create the needed Ingress if at least one server is required by the Spec.
if t.object.Spec.MetadataServerConfig.Enabled || t.object.Spec.DeploymentManagerServerConfig.Enabled ||
t.object.Spec.ResourceServerConfig.Enabled || t.object.Spec.AlarmSubscriptionServerConfig.Enabled {
err = t.createIngress(ctx)
err = t.createIngress(ctx, ingressHost)
if err != nil {
t.logger.ErrorContext(
ctx,
@@ -874,7 +889,7 @@ func (t *reconcilerTask) deployServer(ctx context.Context, serverName string) (u
},
}

deploymentContainerArgs, err := utils.GetServerArgs(ctx, t.client, t.object, serverName)
deploymentContainerArgs, err := utils.GetServerArgs(t.object, serverName)
if err != nil {
err2 := t.updateORANO2ISMUsedConfigStatus(
ctx, serverName, deploymentContainerArgs,
@@ -1060,7 +1075,7 @@ func (t *reconcilerTask) createService(ctx context.Context, resourceName string)
return nil
}

func (t *reconcilerTask) createIngress(ctx context.Context) error {
func (t *reconcilerTask) createIngress(ctx context.Context, ingressHost string) error {
t.logger.InfoContext(ctx, "[createIngress]")
// Build the Ingress object.
ingressMeta := metav1.ObjectMeta{
@@ -1074,7 +1089,7 @@ func (t *reconcilerTask) createIngress(ctx context.Context) error {
ingressSpec := networkingv1.IngressSpec{
Rules: []networkingv1.IngressRule{
{
Host: t.object.Spec.IngressHost,
Host: ingressHost,
IngressRuleValue: networkingv1.IngressRuleValue{
HTTP: &networkingv1.HTTPIngressRuleValue{
Paths: []networkingv1.HTTPIngressPath{
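The ingress-host fallback that this hunk adds to run() boils down to a small pure function. A sketch of that logic (resolveIngressHost is an illustrative name, not part of the PR; the real code calls utils.GetIngressDomain against the cluster):

```go
package main

import "fmt"

// resolveIngressHost mirrors the reconciler's fallback: use the host from the
// Inventory spec when it is set, otherwise derive one by prefixing "o2ims."
// to the cluster's default ingress domain.
func resolveIngressHost(specHost, ingressDomain string) string {
	if specHost != "" {
		return specHost
	}
	return "o2ims." + ingressDomain
}

func main() {
	// A host set in the spec takes precedence over the derived one.
	fmt.Println(resolveIngressHost("o2ims.custom.example.com", "apps.lab.karmalabs.corp"))
	// An empty spec value falls back to the derived host.
	fmt.Println(resolveIngressHost("", "apps.lab.karmalabs.corp"))
}
```

This keeps spec.ingressHost optional while preserving the previous behavior for users who still set it explicitly.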
3 changes: 2 additions & 1 deletion internal/controllers/utils/constants.go
@@ -64,7 +64,8 @@ var (

// Default values for backend URL and token:
const (
defaultBackendURL = "https://kubernetes.default.svc"
defaultApiServerURL = "https://kubernetes.default.svc"
defaultSearchApiURL = "https://search-search-api.open-cluster-management.svc.cluster.local:4010"
defaultBackendTokenFile = "/var/run/secrets/kubernetes.io/serviceaccount/token" // nolint: gosec // hardcoded path only
defaultBackendCABundle = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt" // nolint: gosec // hardcoded path only
defaultServiceCAFile = "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" // nolint: gosec // hardcoded path only
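With getSearchAPI gone, choosing a backend URL reduces to a plain default-or-override decision against the renamed constants. A minimal sketch of that pattern (constant values are copied from this diff; pickBackendURL is an illustrative name):

```go
package main

import "fmt"

// Defaults as renamed/added in constants.go by this PR.
const (
	defaultApiServerURL = "https://kubernetes.default.svc"
	defaultSearchApiURL = "https://search-search-api.open-cluster-management.svc.cluster.local:4010"
)

// pickBackendURL returns the configured URL when present, else the default.
func pickBackendURL(configured, def string) string {
	if configured == "" {
		return def
	}
	return configured
}

func main() {
	// No BackendURL in the spec: the resource server uses the search API default.
	fmt.Println(pickBackendURL("", defaultSearchApiURL))
	// An explicit BackendURL always wins.
	fmt.Println(pickBackendURL("https://search.example.com", defaultSearchApiURL))
}
```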
84 changes: 19 additions & 65 deletions internal/controllers/utils/utils.go
@@ -342,78 +342,35 @@ func GetBackendTokenArg(backendToken string) string {
return fmt.Sprintf("--backend-token-file=%s", defaultBackendTokenFile)
}

// getACMNamespace will determine the ACM namespace from the multiclusterengine object.
//
// multiclusterengine object sample:
//
// apiVersion: multicluster.openshift.io/v1
// kind: MultiClusterEngine
// metadata:
// labels:
// installer.name: multiclusterhub
// installer.namespace: open-cluster-management
func getACMNamespace(ctx context.Context, c client.Client) (string, error) {
// Get the multiclusterengine object.
multiClusterEngine := &unstructured.Unstructured{}
multiClusterEngine.SetGroupVersionKind(schema.GroupVersionKind{
Group: "multicluster.openshift.io",
Kind: "MultiClusterEngine",
// GetIngressDomain will determine the network domain of the default ingress controller
func GetIngressDomain(ctx context.Context, c client.Client) (string, error) {
ingressController := &unstructured.Unstructured{}
ingressController.SetGroupVersionKind(schema.GroupVersionKind{
Group: "operator.openshift.io",
Kind: "IngressController",
Version: "v1",
})
err := c.Get(ctx, client.ObjectKey{
Name: "multiclusterengine",
}, multiClusterEngine)
Name: "default",
Namespace: "openshift-ingress-operator",
}, ingressController)

if err != nil {
oranUtilsLog.Info("[getACMNamespace] multiclusterengine object not found")
return "", fmt.Errorf("multiclusterengine object not found")
oranUtilsLog.Info(fmt.Sprintf("[GetIngressDomain] default ingress controller object not found, error: %s", err))
return "", fmt.Errorf("default ingress controller object not found: %w", err)
}

// Get the ACM namespace by looking at the installer.namespace label.
multiClusterEngineMetadata := multiClusterEngine.Object["metadata"].(map[string]interface{})
multiClusterEngineLabels, labelsOk := multiClusterEngineMetadata["labels"]
spec := ingressController.Object["spec"].(map[string]interface{})
domain, ok := spec["domain"]

if labelsOk {
acmNamespace, acmNamespaceOk := multiClusterEngineLabels.(map[string]interface{})["installer.namespace"]

if !acmNamespaceOk {
return "", fmt.Errorf("multiclusterengine labels do not contain the installer.namespace key")
}
return acmNamespace.(string), nil
}

return "", fmt.Errorf("multiclusterengine object does not have expected labels")
}

// getSearchAPI will dynamically obtain the search API.
func getSearchAPI(ctx context.Context, c client.Client, inventory *inventoryv1alpha1.Inventory) (string, error) {
// Find the ACM namespace.
acmNamespace, err := getACMNamespace(ctx, c)
if err != nil {
return "", err
}

// Split the Ingress to obtain the domain for the Search API.
// searchAPIBackendURL example: https://search-api-open-cluster-management.apps.lab.karmalabs.corp
// IngressHost example: o2ims.apps.lab.karmalabs.corp
// Note: The domain could also be obtained from the spec.host of the search-api route in the
// ACM namespace.
ingressSplit := strings.Split(inventory.Spec.IngressHost, ".apps")
if len(ingressSplit) != 2 {
return "", fmt.Errorf("the searchAPIBackendURL could not be obtained from the IngressHost. " +
"Directly specify the searchAPIBackendURL in the Inventory CR or update the IngressHost")
if ok {
return domain.(string), nil
}
domain := ".apps" + ingressSplit[len(ingressSplit)-1]

// The searchAPI is obtained from the "search-api" string and the ACM namespace.
searchAPI := "https://" + "search-api-" + acmNamespace + domain

return searchAPI, nil
return "", fmt.Errorf("default ingress controller does not have expected 'spec.domain' attribute")
}

func GetServerArgs(ctx context.Context, c client.Client,
inventory *inventoryv1alpha1.Inventory,
serverName string) (result []string, err error) {
func GetServerArgs(inventory *inventoryv1alpha1.Inventory, serverName string) (result []string, err error) {
// MetadataServer
if serverName == InventoryMetadataServerName {
result = slices.Clone(MetadataServerArgs)
@@ -429,10 +386,7 @@ func GetServerArgs(ctx context.Context, c client.Client,
if serverName == InventoryResourceServerName {
searchAPI := inventory.Spec.ResourceServerConfig.BackendURL
if searchAPI == "" {
searchAPI, err = getSearchAPI(ctx, c, inventory)
if err != nil {
return nil, err
}
searchAPI = defaultSearchApiURL
}

result = slices.Clone(ResourceServerArgs)
@@ -469,7 +423,7 @@ // API server of the cluster:
// API server of the cluster:
backendURL := inventory.Spec.DeploymentManagerServerConfig.BackendURL
if backendURL == "" {
backendURL = defaultBackendURL
backendURL = defaultApiServerURL
}

// Add the backend and token args:
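GetIngressDomain reads spec.domain out of an unstructured object, which amounts to two nested type assertions. A standalone sketch of that extraction (domainFromObject is an illustrative name; note that the code in this diff asserts Object["spec"] without an ok check, so it would panic if spec were absent, whereas this sketch returns an error):

```go
package main

import (
	"errors"
	"fmt"
)

// domainFromObject extracts spec.domain from a decoded unstructured object
// (the map form that unstructured.Unstructured wraps), returning an error
// instead of panicking when a level is missing or has the wrong type.
func domainFromObject(obj map[string]interface{}) (string, error) {
	spec, ok := obj["spec"].(map[string]interface{})
	if !ok {
		return "", errors.New("object has no 'spec' map")
	}
	domain, ok := spec["domain"].(string)
	if !ok {
		return "", errors.New("default ingress controller does not have expected 'spec.domain' attribute")
	}
	return domain, nil
}

func main() {
	obj := map[string]interface{}{
		"spec": map[string]interface{}{"domain": "apps.lab.karmalabs.corp"},
	}
	domain, err := domainFromObject(obj)
	fmt.Println(domain, err)
}
```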