
fix: multi-arch must-gather image #1494

Draft: wants to merge 3 commits into master

Conversation

@mateusoliveira43 (Contributor) commented Aug 15, 2024

Why the changes were made

Add support for multi-arch builds of OADP must-gather image

Related to #1487

How to test the changes made

Go to the must-gather folder, change PLATFORM in the Makefile, and run make docker-build. The build should succeed.

Check that the tools removed from the Dockerfile are not needed.
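The manual test above can be scripted. A sketch that prints one build command per platform to try (the PLATFORM variable name comes from the PR description; the platform list is an assumption, so check the must-gather Makefile for the values it actually accepts):

```shell
#!/bin/sh
# Print the build command for each candidate platform; run each one to
# verify the multi-arch build succeeds. The platform list is an assumption.
for platform in linux/amd64 linux/arm64 linux/ppc64le linux/s390x; do
  echo "make -C must-gather docker-build PLATFORM=${platform}"
done
```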

Signed-off-by: Mateus Oliveira <[email protected]>
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Aug 15, 2024

openshift-ci bot commented Aug 15, 2024

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


openshift-ci bot commented Aug 15, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mateusoliveira43

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 15, 2024
@@ -13,21 +13,32 @@ RUN curl --location --output velero.tgz https://github.com/openshift/velero/arch
curl --location --output restic.tgz https://github.com/openshift/restic/archive/refs/heads/${RESTIC_BRANCH}.tar.gz && \
tar -xzvf restic.tgz && cd restic-${RESTIC_BRANCH} && \
CGO_ENABLED=0 GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH} go build -a -mod=mod -ldflags '-extldflags "-static"' -o /restic github.com/restic/restic/cmd/restic && \
cd .. && rm -rf restic.tgz restic-${RESTIC_BRANCH}
cd .. && rm -rf restic.tgz restic-${RESTIC_BRANCH} && \
curl --location --output oc.tgz https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable-4.16/openshift-client-linux-${TARGETARCH}-rhel9.tar.gz && \
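For context on the hunk above: when building with `docker buildx build --platform`, BuildKit populates the TARGETOS and TARGETARCH build args per platform (once the Dockerfile declares them via ARG). A minimal sketch emulating the substitution the Dockerfile relies on:

```shell
# Emulate the per-platform values BuildKit injects; TARGETOS falls back
# to linux exactly as in the Dockerfile's ${TARGETOS:-linux} expansion.
unset TARGETOS
for TARGETARCH in amd64 arm64 ppc64le s390x; do
  echo "GOOS=${TARGETOS:-linux} GOARCH=${TARGETARCH}"
done
```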
Contributor commented:

Here you could use stable (which is currently 4.16), this way we won't need to update once 4.17 is there:

curl --location --output oc.tgz https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux-${TARGETARCH}-rhel9.tar.gz
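As a quick check of the suggested URL scheme, the per-arch filename can be derived from TARGETARCH. A sketch (defaulting to amd64 when TARGETARCH is unset is an assumption):

```shell
# Build the stable oc client URL for the current target architecture.
TARGETARCH="${TARGETARCH:-amd64}"  # BuildKit sets this; the default is an assumption
URL="https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux-${TARGETARCH}-rhel9.tar.gz"
echo "${URL}"
```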

Contributor (author) commented:

I do not like automatic updates.

How does prod handle updates for oc, @rayfordj?

Contributor commented:

At the moment, manually, until we have an oc available from an el9/ubi9 base image at which time we'll then resume updating via follow_tag as the base image is updated. I'm guessing this likely won't happen until >=v4.17.

Contributor commented:

I lean towards stable to ensure it isn't later forgotten and doesn't rot, as has been observed to happen over the years across our repos, but I have no objection to upstream using either a) latest stable or b) latest versioned (such as stable-v4.16). However, the former will continue to be updated across releases, while the latter will eventually stop being maintained. Regardless, it isn't something that will be carried downstream.
😬

Member commented:

could copy oc from container image

Contributor (author) commented:

Changed to stable.

Is there a handy image for that, Tiger? The ones I found were not in registry.access, so they needed auth.

must-gather/Dockerfile (outdated, resolved)
@openshift-bot commented:

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 25, 2024
FROM registry.access.redhat.com/ubi9-minimal:latest

RUN echo -ne "[centos-9-appstream]\nname = CentOS 9 (RPMs) - AppStream\nbaseurl = https://mirror.stream.centos.org/9-stream/AppStream/x86_64/os/\nenabled = 1\ngpgcheck = 0" > /etc/yum.repos.d/centos-9-appstream.repo
RUN microdnf -y install rsync tar gzip graphviz findutils
RUN microdnf -y install tar
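One thing worth noting for a multi-arch image: the AppStream baseurl above hardcodes x86_64. A sketch of an arch-neutral variant using the package manager's $basearch substitution (an assumption to verify: that microdnf expands $basearch in repo files the same way dnf does):

```shell
# Write the repo file with $basearch so the package manager substitutes
# the image's architecture instead of a hardcoded x86_64.
cat > /tmp/centos-9-appstream.repo <<'EOF'
[centos-9-appstream]
name = CentOS 9 (RPMs) - AppStream
baseurl = https://mirror.stream.centos.org/9-stream/AppStream/$basearch/os/
enabled = 1
gpgcheck = 0
EOF
grep 'basearch' /tmp/centos-9-appstream.repo
```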
Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
do-not-merge/work-in-progress: Indicates that a PR should not merge because it is a work in progress.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
5 participants