
OE4T Meeting Notes 2022 03 10

Video

https://youtu.be/T0heKWFsNao

Attendees

9

Topics

  • Jetpack 4.6.1
    • Have pulled in the 32.7.1 kernel tree from the NVIDIA repos and have the unified version of it locally. Will make sure it has the right BSP in it.
    • Will start pulling things together and getting a branch updated. It will go into master; not sure yet what to do about dunfell.
    • NVIDIA still says 32.6.1 is the current stable version; not sure whether that will remain the case going forward.
    • Have had 32.6 on dunfell. If people want 32.7.1 in dunfell, open to suggestions.
    • Dunfell has a couple more years of support, but it is harder to keep things up to date security-wise. One factor with Mender and OTA updates is that third-party vendors can't move quickly; Mender won't be able to move to kirkstone until the summer, which might be a reason to move to dunfell.
    • NVIDIA has historically not been great about maintaining compatibility across BSP releases. If a change winds up not being too intrusive, it's likely worth backporting to 32.7.1.
    • There didn't seem to be anything major in 32.7.1 apart from support for two new machines.
    • Update since the meeting: 32.7.1 support is now on the master branch of meta-tegra and the demo distro. See the release notes at https://github.com/OE4T/meta-tegra/wiki/L4T-R32.7.1-Notes
  • Weston 10 in Master
    • The Weston 9 patches were lost in the move.
    • Got the Weston backend working with GBM.
    • The path where the client-side EGL loads to pass surfaces to Weston is still broken.
    • Planning to port a couple of EGLStreams patches; clients will continue to use EGLStreams until we get Weston 10 support from NVIDIA for the new buffer types.
    • Weston will still have the GBM backend, so clients that don't need EGL Wayland can pass EGL buffers to the compositor.
    • Matt's WIP branch is on the OE4T fork of meta-tegra.
    • Kurt posted his fork, which is more up to date and has updates for the backend; still working on the client side. Will keep updates on GitHub.
  • RAUC
    • The RAUC support is probably not aging well; we're not keeping it up to date.
    • Tim will look into whether there’s already a community repo for RAUC.
    • Could move to meta-tegra-community if someone wants to take it on.
  • Secureboot plus LUKS encryption questions
    • See related thread at https://forums.developer.nvidia.com/t/postmortem-jetson-xavier-agx-will-not-get-past-boot-rom-after-burning-pkc-sbk-kek256/208426 and scripts at https://github.com/moto-timo/secureboot-tegra
    • Bricked a board; challenges working with an AGX Xavier devkit.
    • The tools were originally run on Debian 11, so maybe the binaries are broken and only work on Ubuntu.
    • Created a 3072-bit RSA key rather than a 2048-bit one, which may also be an issue.
    • You brick the system because you write into spaces holding items NVIDIA hasn't disclosed, without which the boot ROM doesn't work.
    • Historically, secureboot has been a problem. Suggestion when starting with a new module, and for handling discrepancies and release issues between modules: generate the fuse package with the keys but without the production-mode fuse burned. If it's possible to flash and boot with binaries that have been signed and encrypted, boot up and check the fuse values published in sysfs, then set the production-level fuse from Linux. Matt hasn't run into the problem described in the thread before, but also hasn't tried the AGX Xavier.
    • Boot flow for encryption: flash with the SBK and PKC, set up the EKB partition, and use that with the keystore application to create the passphrase. Without a TPM holding the LUKS key, this is probably the most secure it can be.
    • NVIDIA's stock Trusty implementation now has some LUKS support, with an SSK you can leave inside the crypto engine as the system boots. The principle is essentially the same: use the KEK or the SSK to form your passphrase, then make sure the passphrase is used only long enough to get through the LUKS setup (see the sketch below).
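A minimal sketch of that passphrase flow, assuming a hypothetical keystore-app helper that derives the passphrase from the hardware-backed key (the real interface depends on your keystore application); the device node is likewise only illustrative:

```shell
# Derive the LUKS passphrase from the hardware-backed key, use it once to
# unlock the volume, then drop it from memory. 'keystore-app' and the
# partition layout are illustrative placeholders.

PASSPHRASE="$(keystore-app derive-luks-passphrase)"

# --key-file=- makes cryptsetup read the passphrase from stdin, so it never
# touches the command line or the filesystem.
printf '%s' "$PASSPHRASE" | \
    cryptsetup luksOpen /dev/mmcblk0p2 cryptroot --key-file=-

# Discard the passphrase as soon as the mapping exists.
unset PASSPHRASE

mount /dev/mapper/cryptroot /mnt
```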
  • License Compliance
    • Need to also hand off the ability to rebuild the product in some way (for GPLv3). Not sure yet what needs to be done based on the licensing requirements. Interested in how this is possible with the NVIDIA components.
    • Turning off GPLv3 items is one way to handle this (see the sketch after this list).
    • Another approach: have a stripped-down version of the Yocto build setup and generate recipes that allow all the GPL'd packages to be built. Publish a tarball on an open source website with the meta layers, the recipes, and instructions for building them. The NVIDIA pieces are proprietary rather than GPL-licensed, so no build mechanism needs to be provided for them; for LGPL, recipients need to be able to drop in their own builds. Discuss with appropriate legal counsel.
    • There are tools within Yocto for license compliance, but Matt has found it simpler to strip out everything that shouldn't be shared and share the rest. Can share some pointers on what this looks like from a customer perspective. Provide tarballs for all git repositories.
    • Kirkstone has SBOM (software bill of materials) support, a requirement for US government work. It's a unified way of disclosing what all the licenses are and where the components came from, in a standardized and machine-readable format (also covered in the sketch below). See the document at https://events19.linuxfoundation.org/wp-content/uploads/2018/07/OSLS-2019_-Building-on-Builds-YoctoSPDX.pdf
    • The archiver class in Yocto: it has been a couple of years, but there was a talk on license compliance that referenced this tool; see https://youtu.be/9wRn-9KhiEI. At some point you must get legal advice.
    • For the NVIDIA components, all you can do is share a link to the location they were downloaded from.
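As a concrete starting point for the approaches above, here is a minimal local.conf sketch assuming a stock kirkstone-era build: excluding (L)GPLv3 components, enabling the archiver class for source release, and generating SPDX SBOMs. The class and variable names are standard Yocto; whether this actually satisfies your obligations is a question for legal counsel.

```shell
# Append minimal compliance settings to the build's conf/local.conf
# (run from the build directory; paths are illustrative).
cat >> conf/local.conf <<'EOF'
# Option 1: exclude (A)GPLv3/LGPLv3 components from the build entirely.
INCOMPATIBLE_LICENSE = "GPL-3.0* LGPL-3.0* AGPL-3.0*"

# Option 2: archive the original upstream sources plus patches so they
# can be redistributed alongside the images.
INHERIT += "archiver"
ARCHIVER_MODE[src] = "original"
ARCHIVER_MODE[diff] = "1"

# kirkstone and later: generate SPDX SBOM documents with the images.
INHERIT += "create-spdx"
EOF
```

The archiver output should land under tmp/deploy/sources, and create-spdx places its SPDX documents alongside the image artifacts.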
  • x86 image which incorporates PCIe GPU
    • Curious whether anyone has done the work to package the NVIDIA drivers and Docker in Yocto for x86.
    • Someone discussed this with Matt in 2017; there is a meta-nvidia project, but it's not active.
    • Ping Ilies about the work he's done with Clara AGX, which uses both an onboard GPU and a discrete GPU. It's not x86, but there are things he needs to do to build for both the discrete GPU and CUDA.
    • Don't want to pull anything non-Tegra into meta-tegra, but we should be able to reuse some of what we have, and maybe some of the work from 2017.
  • Jetpack 5
    • No notification of a new early-access drop (the only EA releases thus far have been related to the BSP).
  • Kirkstone
    • Upstream Python has moved to wheels, deprecated distutils, and deprecated setup.py install, removing everything OpenEmbedded and the Yocto Project relied on. PEP 517 is something important to look at. "The world is changing dramatically."
    • Override syntax changes (see the example after this list).
    • Native tools no longer ship the .pyc files, removing thrash on sstate.
    • Changes to how BitBake does things, related to speed improvements.
    • Rust support: you can now build using Rust. Rust has also hit the Python world, but not enough packages have taken up Rust extensions yet to see the larger patterns.
    • Redid the way CUDA compilation works in meta-tegra: we can use whatever GCC we get from upstream, with our own copy of GCC 8 used for CUDA compilations.
    • Target starting the kirkstone release branch as soon as the upstream release completes, probably early April 2022.
    • Would like to keep kirkstone as stable as we can going forward; not sure what that means with Jetpack 5.
    • Not sure whether we keep everything on the same branch. In the past we've had multiple BSP versions in the same branch of one layer, with one subset of machines on one BSP and another subset on the other; we could potentially do that in kirkstone, though it gets messy to support. For example, a TX2 or Nano would get the JetPack 4 L4T, while Xavier and Orin would get the JetPack 5 release.
    • Limited developer releases from NVIDIA for specific machines have made this more difficult in the past; hopefully that won't happen with Jetpack 5.
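For reference, the override syntax change mentioned above looks like the following; the variable and layer names are only illustrative. openembedded-core also ships a convert-overrides.py helper (under scripts/contrib/) that can rewrite an existing layer in place.

```shell
# Pre-honister override syntax (no longer accepted in kirkstone):
#   IMAGE_INSTALL_append = " weston"
#   KERNEL_MODULE_AUTOLOAD_append = " can"
# honister/kirkstone syntax, with ':' as the override separator:
#   IMAGE_INSTALL:append = " weston"
#   KERNEL_MODULE_AUTOLOAD:append = " can"

# Bulk-convert a layer (script path is relative to an openembedded-core
# or poky checkout; 'meta-my-layer' is a placeholder):
scripts/contrib/convert-overrides.py ../meta-my-layer
```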