Container for (spack-manager) CUDA GPU Build of Exawind for NERSC Science Platform #575
base: main
Conversation
@ajpowelsnl thanks for doing this and sorry it has taken so long to get a review going.
spack (Outdated)
This looks like an update to the submodule file, and not to the spack commit itself; is that right? We have a mirror-only policy on spack changes, so these changes would need to go into mainline spack.
@@ -97,6 +97,9 @@ def is_e4s():
     "perlmutter": MachineData(
         lambda: os.environ["NERSC_HOST"] == "perlmutter", "perlmutter-p1.nersc.gov"
     ),
+    "containergpucuda": MachineData(
I am not sure I like this name. Would we expect this to build any container using CUDA, or specifically containers on Perlmutter? I would prefer to start with a more precise name and relax it later, rather than vice versa.
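For illustration only, here is a minimal, self-contained sketch of what a more narrowly scoped entry could look like, assuming MachineData pairs a detection callable with a canonical host name as in the diff above. The key `perlmutter-container-cuda` and the `EXAWIND_CONTAINER` environment variable are hypothetical names, not the identifiers used in this PR.

```python
import os
from collections import namedtuple

# Stand-in for spack-manager's MachineData: a detection callable plus a
# canonical host name, mirroring the entries shown in the diff above.
MachineData = namedtuple("MachineData", ["detect", "full_machine_name"])

machines = {
    "perlmutter": MachineData(
        lambda: os.environ.get("NERSC_HOST", "") == "perlmutter",
        "perlmutter-p1.nersc.gov",
    ),
    # Hypothetical, more narrowly scoped entry: a CUDA container built for
    # Perlmutter, keyed off a variable the container image itself would set.
    "perlmutter-container-cuda": MachineData(
        lambda: os.environ.get("EXAWIND_CONTAINER", "") == "perlmutter-cuda",
        "perlmutter-p1.nersc.gov",
    ),
}

def find_machine():
    """Return the first machine whose detection test passes, else 'unknown'."""
    for name, data in machines.items():
        if data.detect():
            return name
    return "unknown"

if __name__ == "__main__":
    print(find_machine())
```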
The proposed code specifies a spack-manager-based GPU-capable container on NERSC science platforms.
Build
Run
Expected Output
Helpful Hints
podman-hpc (NERSC's HPC-enabled wrapper around Podman for building and running containers on Perlmutter)
spack-manager depends on a pinned version of Spack, which is a point of configure / build / runtime fragility
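As a quick way to see that fragility, the sketch below (the clone path is an assumption, and it presumes the pinned Spack lives as a git submodule at spack-manager/spack, as the review comment above suggests) compares the Spack commit pinned by the submodule entry with the commit actually checked out:

```python
import subprocess

# Hypothetical location of the spack-manager clone; adjust for your setup.
SPACK_MANAGER_ROOT = "/path/to/spack-manager"

def git(*args):
    # Run a git command and return its stripped stdout.
    return subprocess.check_output(["git", *args], text=True).strip()

# The commit recorded for the 'spack' submodule in spack-manager's tree.
pinned = git("-C", SPACK_MANAGER_ROOT, "ls-tree", "HEAD", "spack")
# The commit actually checked out inside the submodule working directory.
checked_out = git("-C", f"{SPACK_MANAGER_ROOT}/spack", "rev-parse", "HEAD")

print("pinned submodule entry :", pinned)
print("checked-out spack HEAD :", checked_out)
```

If the two commits disagree, builds inside and outside the container may resolve different package recipes even though they appear to use the same spack-manager checkout.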