
Add support for PVH boot protocol #11

Merged: 4 commits, Mar 13, 2020

Conversation

aljimenezb
Contributor

This adds support for parsing the PVH entry point address, if one is encoded in the kernel ELF binary, and returning it as an additional field in KernelLoaderResult. This allows the VMM to choose which boot protocol to use.

These changes have been tested alongside patches for Intel Cloud Hypervisor that successfully boot a guest using the PVH entry point.
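For illustration, a VMM consuming this loader might branch on the new field roughly as follows. This is a sketch: the field name `pvh_entry_addr`, the `BootProtocol` enum, and the simplified `GuestAddress`/`KernelLoaderResult` types here are illustrative stand-ins, not necessarily the crate's final API.

```rust
// Which boot protocol the VMM will configure the guest for.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BootProtocol {
    LinuxBoot, // 64-bit Linux boot protocol (boot_params)
    PvhBoot,   // PVH boot protocol (hvm_start_info)
}

// Simplified stand-ins for the loader's types.
struct GuestAddress(u64);

struct KernelLoaderResult {
    kernel_load: GuestAddress,
    // Present only if a PVH ELF note was found in the kernel image.
    pvh_entry_addr: Option<GuestAddress>,
}

// Prefer PVH when the kernel advertises an entry point for it,
// otherwise fall back to the Linux boot protocol.
fn choose_protocol(result: &KernelLoaderResult) -> (BootProtocol, u64) {
    match result.pvh_entry_addr {
        Some(GuestAddress(addr)) => (BootProtocol::PvhBoot, addr),
        None => (BootProtocol::LinuxBoot, result.kernel_load.0),
    }
}
```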

The PR is marked as WIP since there are still a few open questions:

  • Some definitions from Xen's xen/include/public/arch-x86/hvm/start_info.h (this header also exists in Linux), along with comments, were manually copied into a new file, start_info.rs. When using bindgen to generate start_info.rs from the C header, the output is not very legible, so I opted for the manual approach to keep it readable. Is this acceptable, or should bindgen be used?

  • I took the align_up() function from the x86_64 crate (https://docs.rs/x86_64/0.9.2/x86_64/index.html) and slightly modified it. The goal was to avoid adding a dependency on an external crate. Is this appropriate?

  • Copyright messages: I'm unsure of the proper way to handle these, so I pasted the copyright notice at the top of every file I touched, regardless of the length or importance of the change. What are the guidelines for doing this correctly?
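For reference, the borrowed helper rounds an address up to a power-of-two alignment. A sketch along the lines of the x86_64 crate's version (the exact signature in this PR may differ):

```rust
// Round `addr` up to the next multiple of `align`.
// `align` must be a power of two, which makes the bit-mask trick valid.
fn align_up(addr: u64, align: u64) -> u64 {
    assert!(align.is_power_of_two(), "`align` must be a power of two");
    let align_mask = align - 1;
    if addr & align_mask == 0 {
        addr // already aligned
    } else {
        // Setting the low bits and adding one lands on the next boundary.
        (addr | align_mask) + 1
    }
}
```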

Closes #3

@aljimenezb aljimenezb changed the title [WIP] Add support for PVH boot protocol Add support for PVH boot protocol Feb 28, 2020
@aljimenezb aljimenezb force-pushed the pvh-boot branch 2 times, most recently from a08f097 to 26cfe46 on March 4, 2020 21:50
@rbradford
Collaborator

@bonzini Can you give this a quick look through?

sboeuf
sboeuf previously approved these changes Mar 5, 2020

@sboeuf sboeuf left a comment


This looks clean to me. I'm not an expert though and I'd feel more comfortable if @bonzini and others could review it too.

src/loader/elf.rs (resolved review thread)
@bonzini
Member

bonzini commented Mar 5, 2020

Looks good to me. The main issue is that it does not cover the task of preparing the PVH parameters struct; I'll open an issue about that.

@aljimenezb
Contributor Author

Looks good to me. The main issue is that it does not cover the task of preparing the PVH parameters struct; I'll open an issue about that.

Right, I looked to see whether there was a common place to add this code (and also the initial vCPU register setup), since these areas are very similar in Firecracker and Intel Cloud Hypervisor (likely inherited from crosvm). I thought the vmm-vcpu crate would be a good place for it, but found it empty. Looking at the issue you opened now...

@andreeaflorescu
Member

We also need to add tests for the newly added feature. For some examples, you can check the existing pipeline: https://github.com/rust-vmm/linux-loader/blob/master/.buildkite/pipeline.yml

It's not a hard block, but I would prefer the PR that uses the rust-vmm-ci pipeline to be merged first because it also updates the container (which updates the rust version & adds tests for aarch64): #14

@aljimenezb
Contributor Author

@andreeaflorescu I amended the last commit to use an ELF binary with an additional note header that is unrelated to PVH. The goal is to increase coverage by exercising the code path that ignores such notes. Still, the coverage test fails because coverage drops by 0.20% for x86 and 1% for arm.

I believe the minor drops in coverage are due to the start_info.rs definitions, which do not have layout tests, but I am not sure exactly how the tests work. The previous setup left some files in the workspace where I could examine the report and see the coverage, but after updating to the rust-vmm-ci pipeline I can't find those anymore. Any advice on what I am doing wrong or how to move forward?

@aljimenezb aljimenezb force-pushed the pvh-boot branch 2 times, most recently from 334e311 to 2f51246 on March 9, 2020 15:13
@aljimenezb
Contributor Author

Added layout tests (autogenerated by bindgen) for the hvm_start_info and hvm_memmap_table_entry structs. Also changed the name from hvm_mmap_table_entry to hvm_memmap_table_entry to match the Xen header file (https://xenbits.xenproject.org/docs/unstable/hypercall/x86_64/include,public,arch-x86,hvm,start_info.h.html).

To investigate why the coverage check fails with a drop of 0.20% for x86 and 1% for arm, I modified test_coverage.py in my local repo so the kcov_output directory is not cleaned up after the test runs. I can see that in the x86 case, there are only 3 lines I added that are not covered.
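For context, bindgen's layout tests are plain size and alignment assertions. A sketch of what they look like for hvm_memmap_table_entry, with the field layout taken from the Xen header (the exact generated test names differ):

```rust
use std::mem::{align_of, size_of};

// Layout mirrors Xen's hvm_memmap_table_entry from start_info.h.
#[repr(C)]
#[derive(Debug, Copy, Clone, Default)]
pub struct hvm_memmap_table_entry {
    pub addr: u64,     // base address of the memory region
    pub size: u64,     // size of the region in bytes
    pub type_: u32,    // E820-style region type
    pub reserved: u32, // must be zero
}

// bindgen autogenerates assertions of this shape to pin down the C layout,
// so any accidental field reordering or padding change fails the build's tests.
fn check_layout() {
    assert_eq!(size_of::<hvm_memmap_table_entry>(), 24);
    assert_eq!(align_of::<hvm_memmap_table_entry>(), 8);
}
```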

src/loader/mod.rs (outdated, resolved review thread)
@aljimenezb aljimenezb force-pushed the pvh-boot branch 2 times, most recently from 165d4f4 to e106a2c on March 9, 2020 19:01
@aljimenezb
Contributor Author

I found an ARM box and ran the coverage tests with the instrumented test_coverage.py so that I could look at the reports afterwards.

Before applying changes in this PR:
Instrumented lines: 1039
Executed lines: 737
Code covered: 70.9%

Filename Coverage % Covered lines Uncovered lines Executable lines
/linux-loader/src/loader/elf.rs 0.0% 0 30 30
/linux-loader/src/loader/mod.rs 53.1% 43 38 81
/linux-loader/src/loader/bootparam.rs 74.1% 628 219 847
/linux-loader/src/cmdline/mod.rs 81.5% 66 15 81

=======================================================
After applying changes:
Instrumented lines: 1062
Executed lines: 743
Code covered: 70.0%

Filename Coverage % Covered lines Uncovered lines Executable lines
/linux-loader/src/loader/start_info.rs 0.0% 0 16 16
/linux-loader/src/loader/elf.rs 0.0% 0 34 34
/linux-loader/src/loader/mod.rs 57.1% 48 36 84
/linux-loader/src/loader/bootparam.rs 74.1% 628 219 847
/linux-loader/src/cmdline/mod.rs 82.7% 67 14 81

(Ignoring the spurious change in /linux-loader/src/cmdline/mod.rs, caused by an empty line that is now reported as covered.)

The results above show that the drop in coverage on aarch64 is caused primarily by the new data structures in start_info.rs and elf.rs, which are not used in the aarch64 code.
Given that PVH is, as it stands, a purely x86 specification, we can restrict the inclusion of the start_info structures to x86 builds.
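Restricting the definitions to x86 builds could be done with a cfg attribute. A minimal sketch, shown with an inline module for self-containment (the real code would gate a `pub mod start_info;` declaration in src/loader/mod.rs; the constant shown is the PVH magic value from the Xen header):

```rust
// Compile the PVH definitions only on x86 targets; on aarch64 this
// module simply does not exist, so it cannot hurt coverage there.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
mod start_info {
    // "xEn3" magic value from Xen's start_info.h.
    pub const XEN_HVM_START_MAGIC_VALUE: u32 = 0x336e_c578;
}

#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn pvh_magic() -> u32 {
    start_info::XEN_HVM_START_MAGIC_VALUE
}
```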

@andreeaflorescu
Member

Given that PVH is, as it stands, a purely x86 specification, we can restrict the inclusion of the start_info structures to x86 builds.

Sounds good, let's compile out what is not used on aarch64. If the coverage still fails for weird reasons (it does that from time to time), you can also decrease the coverage.

@aljimenezb
Contributor Author

Sounds good, let's compile out what is not used on aarch64. If the coverage still fails for weird reasons (it does that from time to time), you can also decrease the coverage.

Done. I tried my best to avoid decreasing the coverage threshold, and luckily the numbers worked out after removing start_info.rs from the aarch64 build and adding more unit tests on x86. I actually had to increase the coverage value for x86 in my last commit. Please let me know if anything else is needed before merging.

@alxiord
Member

alxiord commented Mar 10, 2020

I would very much like to see #24 merged before this one, so that we can work on code that's clearly structured per architecture and kernel format (this code would then go to x86/elf, afaict), and we wouldn't have confusing per-arch coverage anymore. However, if it's functionally merge-able, I won't block it; I'll rebase #24 on top instead.

@aljimenezb
Contributor Author

I would very much like to see #24 merged before this one, so that we can work on code that's clearly structured per architecture and kernel format (this code would then go to x86/elf, afaict), and we wouldn't have confusing per-arch coverage anymore. However, if it's functionally merge-able, I won't block it; I'll rebase #24 on top instead.

Unless anyone else has objections, I believe this PR is ready to merge, but I'm ok with whatever you decide is best. As you mentioned, I think the PVH code probably fits best in x86/elf given that it is a special case of ELF boot. I'm not sure a dedicated x86_64/pvh directory is needed...

@alxiord
Member

alxiord commented Mar 10, 2020

I think the PVH code probably fits best in x86/elf given that it is a special case of ELF boot. I'm not sure a dedicated x86_64/pvh directory is needed...

After actually looking at the code, I realize that it isn't. I will update the description.

// Rust definitions needed to enable PVH boot protocol
#[repr(C)]
#[derive(Debug, Copy, Clone, Default)]
pub struct hvm_start_info {
Member


Disclaimer: I don't really know how this works.

I noticed that the structures in this file aren't actually used. Is that intended? How are you supposed to use them? I noticed that we do export them, so I was wondering if we can have some sort of example using them.

Contributor Author

@aljimenezb aljimenezb Mar 11, 2020


The structures are meant to be exported and used by the VMM implementation when writing the necessary info (cmdline address, ACPI RSDP, memory maps, etc.) to guest memory to implement the specific boot protocol. A current example is struct boot_params, which is also defined in linux-loader and then imported by the VMMs. To establish a correspondence between the two:
hvm_start_info <==> boot_params
hvm_memmap_table_entry <==> boot_e820_entry

As Paolo mentioned before, PVH uses a smaller set of boot parameters than the Linux boot protocol; you can see from the definitions that hvm_start_info is much more concise than boot_params.

Currently, VMMs like Intel Cloud Hypervisor and Firecracker use the Linux boot protocol and struct boot_params. The goal of this change is to allow them to use the PVH protocol as well, since it is a standard Linux interface and boots as fast as (or faster than) the direct ELF boot method currently in use. I have an open PR on Intel Cloud Hypervisor and will create one for Firecracker soon.

Right now, for both direct ELF boot and PVH, the VMM is responsible for putting the data structures in guest memory and initializing the vCPU state as required by the specific protocol. Paolo opened #15 to add functionality that takes care of those tasks in this crate as well. IIUC, that would provide the sort of usage example that is missing for both boot_params and hvm_start_info and friends... And of course I can also add comments in start_info.rs if you think that is better.
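To make the intended usage concrete, here is a sketch of how a VMM might populate hvm_start_info before writing it to guest memory. The struct layout and magic value follow the Xen start_info.h header quoted earlier; the `build_start_info` helper and all addresses are made-up placeholders, not part of this crate:

```rust
// Layout mirrors Xen's hvm_start_info (start_info.h, version 1).
#[repr(C)]
#[derive(Debug, Copy, Clone, Default)]
pub struct hvm_start_info {
    pub magic: u32,         // must be XEN_HVM_START_MAGIC_VALUE
    pub version: u32,       // version 1 added the memmap fields
    pub flags: u32,
    pub nr_modules: u32,
    pub modlist_paddr: u64, // guest-physical address of module list
    pub cmdline_paddr: u64, // guest-physical address of the command line
    pub rsdp_paddr: u64,    // guest-physical address of the ACPI RSDP
    pub memmap_paddr: u64,  // guest-physical address of the memory map
    pub memmap_entries: u32,
    pub reserved: u32,
}

// "xEn3" in little-endian, per the PVH start-of-day ABI.
pub const XEN_HVM_START_MAGIC_VALUE: u32 = 0x336e_c578;

// Hypothetical VMM-side helper: fill in the fields the guest needs.
fn build_start_info(
    cmdline_addr: u64,
    rsdp_addr: u64,
    memmap_addr: u64,
    memmap_entries: u32,
) -> hvm_start_info {
    hvm_start_info {
        magic: XEN_HVM_START_MAGIC_VALUE,
        version: 1,
        cmdline_paddr: cmdline_addr,
        rsdp_paddr: rsdp_addr,
        memmap_paddr: memmap_addr,
        memmap_entries,
        ..Default::default()
    }
}
```

The VMM would then write this struct to guest memory and point %ebx at it before entering the guest at the PVH entry point.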

Member


Thanks for the really nice explanation. I think it is worth documenting this somewhere so that noobs like myself can have it easier.

@andreeaflorescu
Member

@bonzini can you take a look at this PR?

src/loader/elf.rs (outdated, resolved review thread)
src/loader/start_info.rs (resolved review thread)
src/loader/mod.rs (outdated, resolved review thread)
src/loader/mod.rs (outdated, resolved review thread)
src/loader/mod.rs (outdated, resolved review thread)
Comment on lines +360 to +379
// The PVH entry point is a 32-bit address, so the descriptor field
// must be capable of storing all such addresses.
if (nhdr.n_descsz as usize) < mem::size_of::<u32>() {
return Err(Error::InvalidPvhNote);
}

let mut pvh_addr_bytes = [0; mem::size_of::<u32>()];

// Read 32-bit address stored in the PVH note descriptor field.
kernel_image
.read_exact(&mut pvh_addr_bytes)
.map_err(|_| Error::ReadNoteHeader)?;

Ok(Some(GuestAddress(
u32::from_le_bytes(pvh_addr_bytes).into(),
)))
Contributor Author

@aljimenezb aljimenezb Mar 12, 2020


I changed this section slightly since the previous revision to make it more generic. The previous code worked correctly for the primary use case (a 64-bit Linux ELF header that encodes the PVH entry point address in an 8-byte field), but it did not work with ELF binaries that encode the address in a 4-byte field (e.g. rust-hypervisor-firmware has a WIP PR that does this).

The PVH specification technically does not define the size of the note descriptor field, only that it must contain a 32-bit address. So regardless of the size of the containing field in the note header, a little-endian architecture stores the 32-bit address in the first 4 bytes of the buffer, and I changed the code to retrieve that value.
Please let me know if you see any issues with this approach.
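The approach described above can be sketched in isolation: whether the note descriptor is 4 or 8 bytes, only the first 4 bytes carry the little-endian 32-bit entry address. The `pvh_addr_from_desc` helper below is illustrative, not the PR's actual function:

```rust
// Extract the 32-bit PVH entry point address from a note descriptor
// of any size >= 4 bytes; the address occupies the first 4 bytes in
// little-endian order.
fn pvh_addr_from_desc(desc: &[u8]) -> Option<u64> {
    if desc.len() < std::mem::size_of::<u32>() {
        // Descriptor too small to hold a 32-bit address (the
        // real code returns Error::InvalidPvhNote here).
        return None;
    }
    let mut bytes = [0u8; 4];
    bytes.copy_from_slice(&desc[..4]);
    Some(u64::from(u32::from_le_bytes(bytes)))
}
```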

Define the ELF Note header structures necessary for parsing
the PVH entry point address encoded in the kernel ELF header.

Generated the elf.rs file again by running bindgen:

bindgen --with-derive-default elf.h > elf.rs

From upstream linux include/uapi/linux/elf.h at commit:
3cc6e2c599cdca573a8f347aea5da4c855ff5a78
and then edited to eliminate unnecessary definitions, add comments,
and relocate definitions and tests for clarity.

Signed-off-by: Alejandro Jimenez <[email protected]>
Introduce the layout and define the start_info, module
list and memory map table entry structures used by the
PVH boot protocol.

The hvm_start_info structure is akin to bootparams in
Linux boot protocol, specifying the small set of
parameters required by the PVH protocol.

Signed-off-by: Alejandro Jimenez <[email protected]>
Parse the ELF header looking for a PVH Note section and retrieve
the encoded PVH entry point address if there is one. The entry
point address is returned in KernelLoaderResult alongside the
typical ELF entry point used for direct boot.
A VMM implementing KernelLoader can now determine whether a
PVH entry point is available and choose to configure its guests
to boot using either PVH or Linux 64-bit protocol.

Signed-off-by: Alejandro Jimenez <[email protected]>
Add test cases to verify the functionality that parses
the ELF Note header to look for a PVH entry point address
if one is encoded.

Parse a minimal ELF binary that encodes a predefined address
of 0x1e1fe1f, and verify that the same value is read. Also
test the case in which a note header is present but no PVH
entry point is encoded, as well as a case where the PVH entry
address is encoded in the note header using a field of
incorrect size.

The minimal ELF source code (elfnote.S):

	#define ELFNOTE_START(name, type, flags)	\
	.pushsection .note.name, flags, @note   ;	\
	  .balign 4                             ;	\
	  .long 2f - 1f         /* namesz */    ;	\
	  .long 4484f - 3f      /* descsz */    ;	\
	  .long type                            ;	\
	1:.asciz #name                          ;	\
	2:.balign 4                             ;	\
	3:

	#define ELFNOTE_END				\
	4484:.balign 4                          ;	\
	.popsection                             ;

	#define ELFNOTE(name, type, desc)		\
	        ELFNOTE_START(name, type, "a")		\
	                desc                    ;	\
	        ELFNOTE_END

	#define XEN_ELFNOTE_PHYS32_ENTRY	18
	#define NT_VERSION			1

	ELFNOTE(dummy, NT_VERSION, .quad 0xcafecafe)
	ELFNOTE(PVHNote, XEN_ELFNOTE_PHYS32_ENTRY, .quad 0x1e1fe1f)

	.section        ".text","ax"
	.global _start
	_start:

Built with:
$ gcc elfnote.S -s -nostdlib -o test_elfnote.bin

The elfnote.S source above is modified to generate the binaries
for the rest of the test cases.

Signed-off-by: Alejandro Jimenez <[email protected]>
Merging this pull request closes: Support PVH boot protocol.
6 participants