Add Cloudstack provider support for Flatcar #1064
Conversation
Welcome @hrak!
Hi @hrak. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: hrak. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/ok-to-test
@@ -24,6 +24,15 @@
      owner: root
      group: root
      mode: 0644
      when: ansible_os_family != "Flatcar"

# For Flatcar, copy the cloudstack support files into the oem partition
Nit: this comment could probably be left out, the block is self-explanatory.
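For context, the Flatcar branch the diff introduces could be sketched roughly like this (the task name, loop variable, and file list are assumptions; only the /usr/share/oem destination comes from the PR description):

```yaml
# Hypothetical sketch: instead of writing files under /etc as on other
# distros, copy the Cloudstack support scripts into the OEM partition.
- name: Copy Cloudstack support files into the OEM partition  # name assumed
  copy:
    src: "{{ item }}"
    dest: /usr/share/oem/
    owner: root
    group: root
    mode: 0644
  loop: "{{ cloudstack_oem_files }}"  # hypothetical variable
  when: ansible_os_family == "Flatcar"
```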
@mboersma thanks for assigning me, but perhaps we should get someone more knowledgeable about Flatcar internals to look into this one. Maybe @jepio or @pothos would find a minute to have a look?

@hrak thanks for the PR. As you mentioned Flatcar's own Cloudstack image, I wonder, could we customize the Packer templates to use it? I'm talking about these:

$ git grep flatcar-linux.net
packer/ova/flatcar.json: "iso_checksum": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso.DIGESTS.asc",
packer/ova/flatcar.json: "iso_url": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso",
packer/qemu/qemu-flatcar.json: "iso_checksum": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso.DIGESTS.asc",
packer/qemu/qemu-flatcar.json: "iso_url": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso",
packer/raw/raw-flatcar.json: "iso_checksum": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso.DIGESTS.asc",
packer/raw/raw-flatcar.json: "iso_url": "https://{{env `FLATCAR_CHANNEL`}}.release.flatcar-linux.net/amd64-usr/{{env `FLATCAR_VERSION`}}/flatcar_production_iso_image.iso",

Could you give it a try?
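For illustration, the two environment variables those templates consume expand into a release URL like this (the channel and version values below are placeholders, not a recommendation):

```shell
# How the Packer templates above assemble the Flatcar release URL from the
# FLATCAR_CHANNEL and FLATCAR_VERSION environment variables.
FLATCAR_CHANNEL=stable
FLATCAR_VERSION=9999.9.9
iso_url="https://${FLATCAR_CHANNEL}.release.flatcar-linux.net/amd64-usr/${FLATCAR_VERSION}/flatcar_production_iso_image.iso"
echo "$iso_url"
```

Swapping the generic release server for a provider-specific image URL would be the customization being suggested here.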
Flatcar does not supply provider-specific ISO images, only VM images. What they do is build a VM image based on the ISO and then build an OEM-specific ebuild into the OEM partition in the 'image_to_vm' build stage (see here). The ebuild containing the OEM-specific files can be found here. The files have not really changed in a long time.

So actually, this does not just apply to Cloudstack, but to any provider supported by image-builder. Which makes me wonder how people currently use Flatcar-based capi images on any of the supported cloud providers, since I don't see oem-azure, oem-gce etc. referenced anywhere.

btw, building based on an image instead of an ISO could be an option if the …
TIL, thanks!
At least on Azure, we use Azure-specific Flatcar images as a base, which is why I think there is no need for special handling of the oem partition.
Interesting. So then you would just point it at the cloudstack disk image and that's it, correct? That seems like a nice win. Could you give it a try, or at least leave a comment in #924 about this please? That way we can collect more use cases, which may make it easier to prioritize this work at some point. Otherwise, I think the PR looks good.
@hrak I'm not excited about copying files out of the Flatcar repos and keeping them here. Not that the files change often, but if they did, no one would remember to keep them in sync. Here are two approaches that I think would be better: …
Using Packer, and especially in the way it's done here with the ISO "bootstrap" phase, makes things more complicated than needed^^ As was already said, I also think the best options would be to start from the right image or at least pass the …
I'm back, and a little wiser after opening a related issue on the Flatcar project. It appears cloud-init on Flatcar does not support Jinja templates, which cluster-api requires for cloud-init, so it's a dead end. Someone pointed out to me in that issue that Ignition is the way to go for cluster-api + Flatcar. This is working for me now, including metadata and installing SSH keys! A description of the process can be found in kubernetes-sigs/cluster-api-provider-cloudstack#216 (comment)

Based on my findings, what could be implemented as a cloudstack provider for Flatcar is those two drop-ins that I now send using Ignition. They override the metadata provider name to be …
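A minimal sketch of what an Ignition-delivered drop-in along those lines could look like. The unit name, drop-in file name, and the exact afterburn flags and paths here are assumptions, not the actual configuration from the linked comment:

```json
{
  "ignition": { "version": "3.3.0" },
  "systemd": {
    "units": [
      {
        "name": "coreos-metadata.service",
        "dropins": [
          {
            "name": "10-cloudstack-provider.conf",
            "contents": "[Service]\n# Hypothetical override: point the metadata fetcher at the\n# Cloudstack provider instead of the image default.\nExecStart=\nExecStart=/usr/bin/coreos-metadata --provider=cloudstack-metadata --attributes=/run/metadata/flatcar\n"
          }
        ]
      }
    ]
  }
}
```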
Thanks, it looks like cloudstack could be switched to afterburn (coreos-metadata). A fix for the same issue was once necessary for openstack: https://github.com/flatcar/bootengine/pull/21/files. Could you PR that for cloudstack?
Indeed, the only caveat is that with afterburn the hostname is not set. This was recently addressed for openstack, which had a similar problem. For cloudstack this is still an open issue.
It doesn't look like this is needed for Cloudstack; I don't have any issues with the hostname being set with Flatcar using afterburn on Cloudstack.
@jepio Looking into this some more now. It looks like the hostname is currently being set by whatever is supplied by DHCP, which works, but I was wondering if there is any advantage in adopting the same openstack approach for Cloudstack?
What I intended to suggest was opening a PR to Flatcar to improve support for cloudstack, so that the OEM support does not need to be fixed downstream in image-builder. That way it works for anyone using cloudstack. The openstack approach just seems similar to me, so you could reuse some of the same units/scripts after extending them a bit.
Checking in - is this still a relevant PR?
Sorry for the delay; no, I think this can be closed. We have a solution now with some systemd drop-ins, and I will explore fixing this upstream.
What this PR does / why we need it:
This PR adds Cloudstack provider support for Flatcar Linux. Flatcar requires some support scripts to be present in the OEM partition (/usr/share/oem) to support the Cloudstack metadata service. These files were taken from Flatcar's own Cloudstack image.