From 01c16e9d87de6a2ea5a21b009acec1698ae9440f Mon Sep 17 00:00:00 2001 From: Callum Walley Date: Mon, 20 Nov 2023 00:53:15 +0000 Subject: [PATCH] Import update Mon Nov 20 00:53:15 UTC 2023 --- ...g_NeSI_Support_during_the_holiday_break.md | 20 +- ...t_and_efficient_use_of_NeSI_HPC_storage.md | 34 +-- ...o_Fair_Share_job_prioritisation_on_Maui.md | 10 +- ..._Milan_CPU_nodes_open_to_all_NeSI_users.md | 25 +- ...umps_generation_now_disabled_as_default.md | 4 +- .../Announcements/Maui_upgrade_is_complete.md | 38 +-- ...achine_Learning_and_GPU_pricing_updates.md | 14 +- .../Slurm_upgrade_to_version_21-8.md | 54 ++-- ...g_the_most_of_Mahuika's_new_Milan_nodes.md | 16 +- ..._cluster_filesystem_on_my_local_machine.md | 8 +- ...indows_style_to_UNIX_style_line_endings.md | 14 +- ...ad_only_team_members_access_to_my_files.md | 72 ++--- ...ect_team_members_read_or_write_my_files.md | 16 +- .../How_can_I_see_how_busy_the_cluster_is.md | 5 +- docs/General/FAQs/How_do_I_request_memory.md | 70 ++--- ..._I_run_my_Python_Notebook_through_SLURM.md | 4 +- .../I_have_not_scanned_the_2FA_QR_code.md | 7 +- .../FAQs/Ive_run_out_of_storage_space.md | 116 ++++---- docs/General/FAQs/Logging_in_to_the_HPCs.md | 10 +- docs/General/FAQs/Login_Troubleshooting.md | 72 ++--- docs/General/FAQs/Password_Expiry.md | 6 +- .../FAQs/Two_Factor_Authentication_FAQ.md | 4 +- ...What_are_my-bashrc_and-bash_profile_for.md | 14 +- docs/General/FAQs/What_does_oom_kill_mean.md | 10 +- ...d_for_Machine_Learning_and_data_science.md | 30 +- ..._should_I_store_my_data_on_NeSI_systems.md | 5 +- .../FAQs/Why_cant_I_log_in_using_MobaXTerm.md | 12 +- ...y_is_my_job_taking_a_long_time_to_start.md | 58 ++-- docs/General/NeSI_Policies/Access_Policy.md | 16 +- ...ccount_Requests_for_non_Tuakiri_Members.md | 30 +- .../How_we_review_applications.md | 130 ++++----- .../NeSI_Policies/Merit_allocations.md | 51 ++-- .../NeSI_Application_Support_Model.md | 40 +-- .../NeSI_Policies/NeSI_Licence_Policy.md | 46 ++-- 
.../NeSI_Policies/NeSI_Password_Policy.md | 15 +- .../NeSI_Policies/NeSI_Privacy_Policy.md | 6 +- .../NeSI_Policies/Postgraduate_allocations.md | 23 +- .../Proposal_Development_allocations.md | 11 +- .../Total_HPC_Resources_Available.md | 3 +- .../About_the_Release_Notes_section.md | 5 +- ...Software_for_Connecting_to_the_Clusters.md | 166 +++++------ .../Accessing_the_HPCs/Port_Forwarding.md | 88 +++--- .../Setting_Up_Two_Factor_Authentication.md | 38 +-- .../Setting_Up_and_Resetting_Your_Password.md | 83 +++--- ...ng_using_the_Ubuntu_Terminal_on_Windows.md | 18 +- .../Applying_for_a_new_NeSI_project.md | 108 ++++---- ...plying_to_join_an_existing_NeSI_project.md | 18 +- .../Creating_a_NeSI_Account_Profile.md | 30 +- ...nd_New_Allocations_on_Existing_Projects.md | 47 ++-- .../Quarterly_allocation_periods.md | 14 +- .../What_is_an_allocation.md | 7 +- .../Cheat_Sheets/Git-Reference_Sheet.md | 74 ++--- .../Cheat_Sheets/Slurm-Reference_Sheet.md | 52 ++-- .../Unix_Shell-Reference_Sheet.md | 18 +- .../Cheat_Sheets/tmux-Reference_sheet.md | 5 +- .../Getting_Help/Consultancy.md | 124 ++++----- .../Getting_Help/Job_efficiency_review.md | 36 +-- .../NeSI_wide_area_network_connectivity.md | 5 +- .../Getting_Help/System_status.md | 4 +- .../Weekly_Online_Office_Hours.md | 31 ++- .../Next_Steps/Finding_Job_Efficiency.md | 42 +-- ...Job_Scaling_Ascertaining_job_dimensions.md | 10 +- .../Next_Steps/MPI_Scaling_Example.md | 182 ++++++------ .../Moving_files_to_and_from_the_cluster.md | 65 ++--- .../Multithreading_Scaling_Example.md | 74 ++--- .../Next_Steps/Parallel_Execution.md | 103 +++---- .../Next_Steps/Submitting_your_first_job.md | 24 +- .../my-nesi-org-nz_release_notes_v2-0-1.md | 12 +- .../my-nesi-org-nz_release_notes_v2-0-3.md | 7 +- .../my-nesi-org-nz_release_notes_v2-1-0.md | 1 + .../my-nesi-org-nz_release_notes_v2-10-0.md | 2 +- .../my-nesi-org-nz_release_notes_v2-12-0.md | 9 +- .../my-nesi-org-nz_release_notes_v2-13-0.md | 14 +- 
.../my-nesi-org-nz_release_notes_v2-14-0.md | 6 +- .../my-nesi-org-nz_release_notes_v2-15-0.md | 20 +- .../my-nesi-org-nz_release_notes_v2-16-0.md | 4 +- .../my-nesi-org-nz_release_notes_v2-17-0.md | 6 +- .../my-nesi-org-nz_release_notes_v2-18-0.md | 28 +- .../my-nesi-org-nz_release_notes_v2-2-0.md | 4 +- .../my-nesi-org-nz_release_notes_v2-3-0.md | 3 +- .../my-nesi-org-nz_release_notes_v2-4-0.md | 2 +- .../my-nesi-org-nz_release_notes_v2-5-0.md | 6 +- .../my-nesi-org-nz_release_notes_v2-6-0.md | 4 +- .../my-nesi-org-nz_release_notes_v2-7-0.md | 4 +- .../my-nesi-org-nz_release_notes_v2-8-0.md | 8 +- .../my-nesi-org-nz_release_notes_v2-9-0.md | 4 +- .../Logging_in_to_my-nesi-org-nz.md | 6 +- .../Managing_notification_preferences.md | 8 +- ...gating_the_my-nesi-org-nz_web_interface.md | 4 +- ..._renew_an_allocation_via_my-nesi-org-nz.md | 25 +- .../The_NeSI_Project_Request_Form.md | 31 ++- .../Tuakiri_Attribute_Validator.md | 2 +- .../Billing_process.md | 6 +- .../Types_of_contracts.md | 32 +-- .../Overview/Pricing.md | 5 +- .../Overview/Questions.md | 3 +- .../Overview/What_is_a_Subscription.md | 32 +-- .../Allocation_approvals.md | 11 +- .../Service_Governance_contact.md | 2 +- .../Subscriber_Monthly_Usage_Reports.md | 26 +- ...ainer_container_on_a_Milan_compute_node.md | 30 +- .../Compiling_software_on_Mahuika.md | 188 ++++++------- .../Configuring_Dask_MPI_jobs.md | 64 ++--- .../Finding_Software.md | 10 +- .../Installing_Third_Party_applications.md | 35 +-- .../NICE_DCV_Setup.md | 204 +++++++------- .../NVIDIA_GPU_Containers.md | 76 +++--- .../Offloading_to_GPU_with_OpenACC.md | 24 +- .../Offloading_to_GPU_with_OpenMP.md | 16 +- .../OpenMP_settings.md | 4 +- .../Per_job_temporary_directories.md | 2 +- ...nt_differences_between_Maui_and_Mahuika.md | 63 ++--- .../Thread_Placement_and_Thread_Affinity.md | 56 ++-- .../Visualisation_software.md | 15 +- .../Jupyter_kernels_Manual_management.md | 72 ++--- ...upyter_kernels_Tool_assisted_management.md | 2 +- 
.../Jupyter_on_NeSI.md | 86 +++--- .../MATLAB_via_Jupyter_on_NeSI.md | 19 +- .../RStudio_via_Jupyter_on_NeSI.md | 19 +- .../Virtual_Desktop_via_Jupyter_on_NeSI.md | 98 +++---- .../Manuals_and_User_Guides/Manuals.md | 33 +-- .../Troubleshooting_on_NeSI.md | 5 +- .../XC50_Aries_Network_Architecture.md | 28 +- .../Profiling_and_Debugging/Debugging.md | 28 +- .../Profiler-ARM_MAP.md | 2 +- .../Profiling_and_Debugging/Profiler-VTune.md | 41 +-- .../Slurm_Native_Profiling.md | 10 +- ...er-nesi-org-nz_release_notes_02-02-2023.md | 16 +- ...er-nesi-org-nz_release_notes_02-06-2022.md | 50 ++-- ...er-nesi-org-nz_release_notes_02-11-2021.md | 3 +- ...er-nesi-org-nz_release_notes_12-05-2021.md | 10 +- ...er-nesi-org-nz_release_notes_12-07-2022.md | 5 +- ...er-nesi-org-nz_release_notes_14-10-2021.md | 4 +- ...er-nesi-org-nz_release_notes_14-11-2023.md | 4 +- ...er-nesi-org-nz_release_notes_15-06-2023.md | 4 +- ...er-nesi-org-nz_release_notes_16-09-2021.md | 15 +- ...er-nesi-org-nz_release_notes_19-05-2023.md | 2 +- ...er-nesi-org-nz_release_notes_24-09-2021.md | 2 +- ...er-nesi-org-nz_release_notes_25-08-2022.md | 30 +- ...er-nesi-org-nz_release_notes_28-06-2022.md | 4 +- ...er-nesi-org-nz_release_notes_31-03-2022.md | 2 +- ..._projects_usage_using_nn_corehour_usage.md | 4 +- .../Checksums.md | 19 +- .../Fair_Share.md | 76 +++--- .../GPU_use_on_NeSI.md | 198 +++++++------- .../Hyperthreading.md | 63 ++--- .../Job_Checkpointing.md | 18 +- .../Job_prioritisation.md | 7 +- .../Mahuika_Slurm_Partitions.md | 38 +-- .../Maui_Slurm_Partitions.md | 7 +- .../Milan_Compute_Nodes.md | 14 +- .../NetCDF-HDF5_file_locking.md | 6 +- .../SLURM-Best_Practice.md | 2 +- .../Slurm_Interactive_Sessions.md | 90 +++--- .../Supported_Applications/ABAQUS.md | 36 +-- .../Supported_Applications/ANSYS.md | 186 ++++++------- .../Supported_Applications/AlphaFold.md | 50 ++-- .../Supported_Applications/BLAST.md | 11 +- .../Supported_Applications/BRAKER.md | 42 +-- .../Supported_Applications/CESM.md | 1 + 
.../Supported_Applications/COMSOL.md | 38 +-- .../Supported_Applications/Clair3 .md | 27 +- .../Supported_Applications/Cylc.md | 66 ++--- .../Supported_Applications/Delft3D.md | 4 +- .../Supported_Applications/Dorado.md | 10 +- .../Supported_Applications/FDS.md | 17 +- .../Find execution hot spots with VTune.md | 17 +- .../Supported_Applications/GATK.md | 23 +- .../Supported_Applications/GROMACS.md | 22 +- .../Supported_Applications/Gaussian.md | 32 +-- .../Supported_Applications/Java.md | 17 +- .../Supported_Applications/Julia.md | 190 ++++++------- .../Supported_Applications/JupyterLab.md | 120 ++++---- .../Supported_Applications/Keras.md | 23 +- .../Supported_Applications/Lambda Stack.md | 25 +- .../Supported_Applications/MAKER.md | 7 +- .../Supported_Applications/MATLAB.md | 100 +++---- .../Supported_Applications/Miniconda3.md | 82 +++--- .../Supported_Applications/ORCA.md | 10 +- .../Supported_Applications/OpenFOAM.md | 104 +++---- .../Supported_Applications/OpenSees.md | 5 +- .../Supported_Applications/ParaView.md | 52 ++-- .../Supported_Applications/Python.md | 122 ++++----- .../Supported_Applications/R.md | 68 ++--- .../Supported_Applications/RAxML.md | 8 +- .../Supported_Applications/Relion.md | 7 +- .../Supported_Applications/Singularity.md | 54 ++-- .../Software Installation Request.md | 42 +-- .../Supported_Applications/Supernova.md | 56 ++-- .../Supported_Applications/Synda.md | 2 +- .../TensorFlow on CPUs.md | 9 +- .../TensorFlow on GPUs.md | 182 ++++++------ .../Supported_Applications/Trinity.md | 50 ++-- .../Supported_Applications/TurboVNC.md | 220 +++++++-------- .../Supported_Applications/VASP.md | 106 +++---- .../Supported_Applications/VirSorter.md | 11 +- .../Supported_Applications/WRF.md | 22 +- .../Supported_Applications/ipyrad.md | 8 +- .../Supported_Applications/ont-guppy-gpu.md | 18 +- .../Supported_Applications/snpEff.md | 31 ++- .../Terminal_Setup/Git_Bash_Windows.md | 82 +++--- .../Terminal_Setup/MobaXterm_Setup_Windows.md | 84 +++--- 
.../Terminal_Setup/Standard_Terminal_Setup.md | 106 +++---- .../Ubuntu_LTS_terminal_Windows_10.md | 70 ++--- .../WinSCP-PuTTY_Setup_Windows.md | 60 ++-- .../Windows_Subsystem_for_Linux_WSL.md | 20 +- .../Terminal_Setup/X11_on_NeSI.md | 22 +- .../Available_GPUs_on_NeSI.md | 4 +- .../Mahuika.md | 21 +- .../Maui.md | 31 ++- .../Maui_Ancillary.md | 44 +-- .../Overview.md | 38 +-- ...troduction_to_computing_on_the_NeSI_HPC.md | 8 +- .../Scientific_Computing/Training/Webinars.md | 77 +++--- .../Training/Workshops.md | 16 +- docs/Storage/Data_Recovery/File_Recovery.md | 14 +- .../Data_Transfer_using_Globus_V5.md | 68 ++--- ..._without_NeSI_two_factor_authentication.md | 28 +- ...d_share_CMIP6_data_for_NIWA_researchers.md | 10 +- .../Globus_Quick_Start_Guide.md | 35 +-- ...V5_Paths-Permissions-Storage_Allocation.md | 19 +- .../Globus_V5_endpoint_activation.md | 13 +- ...obus_Sign_Up-and_your_Globus_Identities.md | 27 +- .../National_Data_Transfer_Platform.md | 4 +- .../Personal_Globus_Endpoint_Configuration.md | 2 +- ..._Collections_and_Bookmarks_in_Globus_V5.md | 63 ++--- ...d_another_computer_with_globus_automate.md | 62 ++--- ...omatic_cleaning_of_nobackup_file_system.md | 148 +++++----- .../Data_Compression.md | 38 +-- .../File_permissions_and_groups.md | 86 +++--- .../I-O_Performance_Considerations.md | 23 +- .../NeSI_File_Systems_and_Quotas.md | 72 ++--- .../Nearline_Long_Term_Storage_Service.md | 258 +++++++++--------- ...files_for_migration_to_Nearline_storage.md | 105 +++---- .../Verifying_uploads_to_Nearline_storage.md | 64 ++--- ...torage_Nearline_release_notes_v1-1-0-14.md | 24 +- ...torage_Nearline_release_notes_v1-1-0-21.md | 60 ++-- ...torage_Nearline_release_notes_v1-1-0-22.md | 20 +- ...torage_Nearline_release_notes_v1-1-0-18.md | 56 ++-- ...torage_Nearline_release_notes_v1-1-0-19.md | 56 ++-- ...torage_Nearline_release_notes_v1-1-0-20.md | 46 ++-- docs/assets/images/OpenFOAM_0.png | 2 +- 242 files changed, 4668 insertions(+), 4601 deletions(-) diff --git 
a/docs/General/Announcements/Accessing_NeSI_Support_during_the_holiday_break.md b/docs/General/Announcements/Accessing_NeSI_Support_during_the_holiday_break.md index be2267276..bd6762a9a 100644 --- a/docs/General/Announcements/Accessing_NeSI_Support_during_the_holiday_break.md +++ b/docs/General/Announcements/Accessing_NeSI_Support_during_the_holiday_break.md @@ -37,21 +37,21 @@ A quick reminder of our main support channels as well as other sources of self-service support: - [Submit a ticket to -Support](https://support.nesi.org.nz/hc/en-gb/requests/new "https://support.nesi.org.nz/hc/en-gb/requests/new") (Note: -non-emergency requests will be addressed on or after 03 January -2024) + Support](https://support.nesi.org.nz/hc/en-gb/requests/new "https://support.nesi.org.nz/hc/en-gb/requests/new") (Note: + non-emergency requests will be addressed on or after 03 January + 2024) - [Sign up for NeSI system status -updates](https://support.nesi.org.nz/hc/en-gb/articles/360000751636 "https://support.nesi.org.nz/hc/en-gb/articles/360000751636") for -advance warning of any system updates or unplanned outages. + updates](https://support.nesi.org.nz/hc/en-gb/articles/360000751636 "https://support.nesi.org.nz/hc/en-gb/articles/360000751636") for + advance warning of any system updates or unplanned outages.  
- [Consult our User -Documentation](https://support.nesi.org.nz/hc/en-gb/categories/360000013836 "https://support.nesi.org.nz/hc/en-gb/categories/360000013836") pages -for instructions and guidelines for using the systems + Documentation](https://support.nesi.org.nz/hc/en-gb/categories/360000013836 "https://support.nesi.org.nz/hc/en-gb/categories/360000013836") pages + for instructions and guidelines for using the systems - [Visit NeSI’s YouTube -channel](https://www.youtube.com/playlist?list=PLvbRzoDQPkuGMWazx5LPA6y8Ji6tyl0Sp "https://www.youtube.com/playlist?list=PLvbRzoDQPkuGMWazx5LPA6y8Ji6tyl0Sp") for -introductory training webinars + channel](https://www.youtube.com/playlist?list=PLvbRzoDQPkuGMWazx5LPA6y8Ji6tyl0Sp "https://www.youtube.com/playlist?list=PLvbRzoDQPkuGMWazx5LPA6y8Ji6tyl0Sp") for + introductory training webinars On behalf of the entire NeSI team, we wish you a safe and relaxing -holiday. \ No newline at end of file +holiday.  \ No newline at end of file diff --git a/docs/General/Announcements/Improved_data_management_and_efficient_use_of_NeSI_HPC_storage.md b/docs/General/Announcements/Improved_data_management_and_efficient_use_of_NeSI_HPC_storage.md index 77cee8a79..912478c38 100644 --- a/docs/General/Announcements/Improved_data_management_and_efficient_use_of_NeSI_HPC_storage.md +++ b/docs/General/Announcements/Improved_data_management_and_efficient_use_of_NeSI_HPC_storage.md @@ -27,12 +27,12 @@ data management policies and best practices for our HPC facilities. By adopting these measures to regularly audit, clean and manage the amount of data on our filesystems, we’ll ensure they remain high-performing and responsive to your research computing workloads and -data science workflows. - +data science workflows. 
+ ## Upcoming changes to data management processes for project directories -** +** 4-15 October 2021** The NeSI project filesystem is becoming critically full, however it is @@ -63,14 +63,14 @@ and we will consider whether a [Nearline](https://support.nesi.org.nz/hc/en-gb/articles/360001169956-Long-Term-Storage-Service "https://support.nesi.org.nz/hc/en-gb/articles/360001169956-Long-Term-Storage-Service") storage allocation would be appropriate to manage this. - +  **18 October 2021** We will begin a limited roll-out of a new feature to automatically identify inactive files in  `/nesi/project/` directories and schedule them for deletion. Generally, we will be looking to identify files that -are inactive / untouched for more than 12 months. +are inactive / untouched for more than 12 months.  A selection of active projects will be invited to participate in this first phase of the programme. If you would like to volunteer to be an @@ -86,7 +86,7 @@ Alongside this work, we will also adopt a new policy on how long inactive data may be stored on NeSI systems, particularly once a research project itself becomes inactive. - +  **January 2022** @@ -95,13 +95,13 @@ data management programme to include all active projects on NeSI. Additional Support documentation and user information sessions will be hosted prior to wider implementation, to provide advance notice of the change and to answer any questions you may have around data lifecycle -management. - +management.  +  ## Frequently asked questions -**1) Why are you introducing these new data management processes? +**1) Why are you introducing these new data management processes? **We want to avoid our online filesystems reaching critically full levels, as that impacts their performance and availability for users. 
We also want to ensure our active storage filesystems aren't being used to @@ -110,19 +110,19 @@ for `/nesi/project/` directories will complement our existing programme of [automatic cleaning of the /nobackup file system](https://support.nesi.org.nz/hc/en-gb/articles/360001162856 "https://support.nesi.org.nz/hc/en-gb/articles/360001162856"). - +  **2) Can I check how much storage I’m currently using on NeSI systems?** You can query your actual usage and disk allocations at any time using -the following command: +the following command:  `$ nn_storage_quota` The values for 'nn\_storage\_quota' are updated approximately every hour and cached between updates. - +  **3) Can I recover data that I accidentally delete from my /project directory?** @@ -132,7 +132,7 @@ them for up to seven days. For more information, [refer to our File Recovery page](https://support.nesi.org.nz/hc/en-gb/articles/360000207315-File-Recovery "https://support.nesi.org.nz/hc/en-gb/articles/360000207315-File-Recovery"). - +  **4) Where should I store my data on NeSI systems?** @@ -145,9 +145,9 @@ used to build and edit code, provided that the code is under version control and changes are regularly checked into upstream revision control systems. The **long-term storage service** should be used for larger datasets that you only access occasionally and do not need to change in -situ. - +situ.  +  **5) What should I do if I run out of storage space?** @@ -156,7 +156,7 @@ space* and *inodes (number of files)*. If you run into problems with either of these, [refer to this Support page for more information](https://support.nesi.org.nz/hc/en-gb/articles/360001125996-I-ve-run-out-of-storage-space "https://support.nesi.org.nz/hc/en-gb/articles/360001125996-I-ve-run-out-of-storage-space"). - +  **6) I have questions that aren’t covered here. 
Who can I talk to?** @@ -165,7 +165,7 @@ Support](https://support.nesi.org.nz/hc/en-gb/requests/new "https://support.nesi No question is too big or small and our intention is always to work with you to find the best way to manage your research data. - +  ## More information diff --git a/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md b/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md index 7f597d59a..037ddf979 100644 --- a/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md +++ b/docs/General/Announcements/Improvements_to_Fair_Share_job_prioritisation_on_Maui.md @@ -55,11 +55,11 @@ We have now recalculated the shares for each pool to take into account the following: - The investments into HPC platforms by the various collaborating -institutions and by MBIE; + institutions and by MBIE; - The capacity of each HPC platform; - The split of requested time (allocations) by project teams between -the Māui and Mahuika HPC platforms, both overall and within each -institution's pool. + the Māui and Mahuika HPC platforms, both overall and within each + institution's pool. Under this scheme, any job's priority is affected by the behaviour of other workload within the same project team, but also other project @@ -68,9 +68,9 @@ has been under-using compared to your allocation, your jobs may still be held up if: - Other project teams at your institution (within your pool) have been -over-using compared to their allocations, or + over-using compared to their allocations, or - Your institution has approved project allocations totalling more -time than it is entitled to within its pool's share. + time than it is entitled to within its pool's share. ## What will I notice? 
diff --git a/docs/General/Announcements/Mahuika's_new_Milan_CPU_nodes_open_to_all_NeSI_users.md b/docs/General/Announcements/Mahuika's_new_Milan_CPU_nodes_open_to_all_NeSI_users.md index d60781a00..cde27e69a 100644 --- a/docs/General/Announcements/Mahuika's_new_Milan_CPU_nodes_open_to_all_NeSI_users.md +++ b/docs/General/Announcements/Mahuika's_new_Milan_CPU_nodes_open_to_all_NeSI_users.md @@ -31,37 +31,38 @@ research needs. **What’s new** - faster, more powerful computing, enabled by AMD 3rd Gen EPYC Milan -architecture + architecture - specialised high-memory capabilities, allowing rapid simultaneous -processing + processing - improved energy efficiency - these nodes are 2.5 times more power -efficient than Mahuika’s original Broadwell nodes + efficient than Mahuika’s original Broadwell nodes **How to access** - Visit our Support portal for [instructions to get -started](https://support.nesi.org.nz/hc/en-gb/articles/6367209795471-Milan-Compute-Nodes "https://support.nesi.org.nz/hc/en-gb/articles/6367209795471-Milan-Compute-Nodes") -and details of how the Milan nodes differ from Mahuika’s original -Broadwell nodes + started](https://support.nesi.org.nz/hc/en-gb/articles/6367209795471-Milan-Compute-Nodes "https://support.nesi.org.nz/hc/en-gb/articles/6367209795471-Milan-Compute-Nodes") + and details of how the Milan nodes differ from Mahuika’s original + Broadwell nodes **Learn more** - [Watch this webinar](https://youtu.be/IWRZLl__uhg) sharing a quick -overview of the new resources and some tips for making the most of -the nodes. + overview of the new resources and some tips for making the most of + the nodes. 
- Bring questions to our [weekly Online Office -Hours](https://support.nesi.org.nz/hc/en-gb/articles/4830713922063-Weekly-Online-Office-Hours "https://support.nesi.org.nz/hc/en-gb/articles/4830713922063-Weekly-Online-Office-Hours") + Hours](https://support.nesi.org.nz/hc/en-gb/articles/4830713922063-Weekly-Online-Office-Hours "https://support.nesi.org.nz/hc/en-gb/articles/4830713922063-Weekly-Online-Office-Hours") - [Email NeSI -Support](mailto:support@nesi.org.nz "mailto:support@nesi.org.nz") -any time - + Support](mailto:support@nesi.org.nz "mailto:support@nesi.org.nz") + any time +  If you have feedback on the new nodes or suggestions for improving your experience getting started with or using any of our systems, please [get in touch](mailto:support@nesi.org.nz "mailto:support@nesi.org.nz"). +  \ No newline at end of file diff --git a/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md b/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md index e36bf0941..de027b428 100644 --- a/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md +++ b/docs/General/Announcements/Mahuika-Core_Dumps_generation_now_disabled_as_default.md @@ -24,10 +24,10 @@ zendesk_section_id: 200732737 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) -A Slurm configuration change has been made on Mahuika so that the +A Slurm configuration change has been made on Mahuika so that the  maximum size of [core file](https://support.nesi.org.nz/hc/en-gb/articles/360001584875-What-is-a-core-file-) that can be generated inside a job now defaults to `0` bytes rather -than `unlimited`. +than `unlimited`.  You can reenable core dumps with `ulimit -c unlimited` . 
\ No newline at end of file diff --git a/docs/General/Announcements/Maui_upgrade_is_complete.md b/docs/General/Announcements/Maui_upgrade_is_complete.md index 65bc02525..f2bc028fe 100644 --- a/docs/General/Announcements/Maui_upgrade_is_complete.md +++ b/docs/General/Announcements/Maui_upgrade_is_complete.md @@ -53,33 +53,33 @@ rebuilt and/or updated versions of these applications (though this will be an ongoing effort post-upgrade). The following information will help your transition from the pre-upgrade -Māui environment to the post-upgrade one: +Māui environment to the post-upgrade one:  - The three main toolchains (CrayCCE, CrayGNU and CrayIntel) have all -been updated to release 23.02 (CrayCCE and CrayGNU) and 23.02-19 -(CrayIntel). **The previously installed versions are no longer -available**. + been updated to release 23.02 (CrayCCE and CrayGNU) and 23.02-19 + (CrayIntel). **The previously installed versions are no longer + available**. - Consequently, nearly all of the previously provided **environment -modules have been replaced by new versions**. You can use the -*module avail* command to see what versions of those software -packages are now available. If your batch scripts load exact module -versions, they will need updating. + modules have been replaced by new versions**. You can use the + *module avail* command to see what versions of those software + packages are now available. If your batch scripts load exact module + versions, they will need updating. - The few jobs in the Slurm queue at the start of the upgrade process -have been placed in a “user hold” state. You have the choice of -cancelling them with *scancel <jobid>* or releasing them with -*scontrol release <jobid>*. + have been placed in a “user hold” state. You have the choice of + cancelling them with *scancel <jobid>* or releasing them with + *scontrol release <jobid>*. 
- Be aware that if you have jobs submitted that rely on any software -built before the upgrade, there is a good chance that this software -will not run. **We recommend rebuilding any binaries you maintain** -before running jobs that utilise those binaries. + built before the upgrade, there is a good chance that this software + will not run. **We recommend rebuilding any binaries you maintain** + before running jobs that utilise those binaries. - Note that Māui login does not require adding a second factor to the -password when authenticating on the Māui login node after the first -successful login attempt. That is, if you have successfully logged -in using <first factor><second factor> format, no second -factor part will be required later on. + password when authenticating on the Māui login node after the first + successful login attempt. That is, if you have successfully logged + in using <first factor><second factor> format, no second + factor part will be required later on. We have also updated our support documentation for Māui to reflect the -changes, so please review it before starting any new projects. +changes, so please review it before starting any new projects.  ## Software Changes diff --git a/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md b/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md index 7fef11950..de58ea80c 100644 --- a/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md +++ b/docs/General/Announcements/New_capabilities_for_Machine_Learning_and_GPU_pricing_updates.md @@ -22,7 +22,7 @@ zendesk_section_id: 200732737 We’re excited to announce an addition of new GPU capabilities to our platform and some noteworthy changes to resource pricing as a result. 
- +  **New Graphics Processing Units (GPUs)** @@ -32,7 +32,7 @@ providing a significant boost in computing performance and an environment particularly suited to machine learning workloads. Over the last few months we’ve worked directly with a group of beta tester researchers to ensure this new capability is fit-for-purpose and tuned -to communities' specific software and tool requirements. +to communities' specific software and tool requirements.  These new A100s, alongside [software optimised for data science](https://support.nesi.org.nz/hc/en-gb/articles/360004558895-What-software-environments-on-NeSI-are-optimised-for-Machine-Learning-approaches-), @@ -41,7 +41,7 @@ this is you, [contact NeSI Support](mailto:https://support.nesi.org.nz/hc/en-gb/requests/new) to discuss how these new resources could support your work. - +  **Reduced pricing for P100s** @@ -65,7 +65,7 @@ you have questions about allocations or how to access the P100s, [contact NeSI Support](mailto:https://support.nesi.org.nz/hc/en-gb/requests/new). - +  **Sharing our learning along the way** @@ -81,7 +81,7 @@ conducted in the spaces of deep learning and molecular dynamics codes, as well as take a closer look at which codes are suitable to run on GPUs and whether your research project is a fit. - +  **Future GPU investments** @@ -99,13 +99,13 @@ A100s for something other than machine learning, let us know by Support](mailto:https://support.nesi.org.nz/hc/en-gb/requests/new) - that way we can keep you up to date on our plans. - +  If you have questions or comments on anything mentioned above, please [get in touch](https://support.nesi.org.nz/hc/en-gb/requests/new). 
- +  Thank you, diff --git a/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md b/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md index 46b506f50..f93ffba59 100644 --- a/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md +++ b/docs/General/Announcements/Slurm_upgrade_to_version_21-8.md @@ -28,57 +28,57 @@ zendesk_section_id: 200732737 - Add time specification: "now-" (i.e. subtract from the present) - AllocGres and ReqGres were removed. Alloc/ReqTres should be used -instead. + instead.  - MAGNETIC flag on reservations. Reservations the user doesn't have to -even request. + even request. - The LicensesUsed line has been removed from `scontrol show config` . -Please use updated `scontrol show licenses` command as an -alternative. + Please use updated `scontrol show licenses` command as an + alternative. -  `--threads-per-core` now influences task layout/binding, not just -allocation. + allocation. - `--gpus-per-node` can be used instead of `--gres=GPU` - `--hint=nomultithread` can now be replaced -with `--threads-per-core=1` + with `--threads-per-core=1` - The inconsistent terminology and environment variable naming for -Heterogeneous Job ("HetJob") support has been tidied up. + Heterogeneous Job ("HetJob") support has been tidied up. - The correct term for these jobs are "HetJobs", references to -"PackJob"   have been corrected. + "PackJob"   have been corrected. - The correct term for the separate constituent jobs are -"components",   references to "packs" have been corrected. + "components",   references to "packs" have been corrected. - Added support for an "Interactive Step", designed to be used with -salloc to launch a terminal on an allocated compute node -automatically. Enable by setting "use\_interactive\_step" as part of -LaunchParameters. + salloc to launch a terminal on an allocated compute node + automatically. Enable by setting "use\_interactive\_step" as part of + LaunchParameters. 
-  By default, a step started with srun will be granted exclusive (or -non- overlapping) access to the resources assigned to that step. No -other parallel step will be allowed to run on the same resources at -the same time. This replaces one facet of the '--exclusive' option's -behavior, but does not imply the '--exact' option described below. -To get the previous default behavior - which allowed parallel steps -to share all resources - use the new srun '--overlap' option. + non- overlapping) access to the resources assigned to that step. No + other parallel step will be allowed to run on the same resources at + the same time. This replaces one facet of the '--exclusive' option's + behavior, but does not imply the '--exact' option described below. + To get the previous default behavior - which allowed parallel steps + to share all resources - use the new srun '--overlap' option. - In conjunction to this non-overlapping step allocation behavior -being the new default, there is an additional new option for step -management '--exact', which will allow a step access to only those -resources requested by the step. This is the second half of the -'--exclusive' behavior. Otherwise, by default all non-gres resources -on each node in the allocation will be used by the step, making it -so no other parallel step will have access to those resources unless -both steps have specified '--overlap'. + being the new default, there is an additional new option for step + management '--exact', which will allow a step access to only those + resources requested by the step. This is the second half of the + '--exclusive' behavior. Otherwise, by default all non-gres resources + on each node in the allocation will be used by the step, making it + so no other parallel step will have access to those resources unless + both steps have specified '--overlap'. - New command which permits crontab-compatible job scripts to be -defined. 
These scripts will recur automatically (at most) on the -intervals described. \ No newline at end of file + defined. These scripts will recur automatically (at most) on the + intervals described. \ No newline at end of file diff --git a/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuika's_new_Milan_nodes.md b/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuika's_new_Milan_nodes.md index 986b39dba..d0bd3ce9d 100644 --- a/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuika's_new_Milan_nodes.md +++ b/docs/General/Announcements/Upcoming_webinar-Tips_for_making_the_most_of_Mahuika's_new_Milan_nodes.md @@ -26,13 +26,13 @@ within their research teams. Join us on Thursday 30 March for a short webinar sharing some practical tips and tricks for making the most of these new resources: -**Making the most of Mahuika's new Milan nodes -Thursday 30 March** -**11:30 am - 12:00 pm** +**Making the most of Mahuika's new Milan nodes +Thursday 30 March** +**11:30 am - 12:00 pm** **[Click here to RSVP](https://www.eventbrite.co.nz/e/webinar-making-the-most-of-mahuikas-new-milan-nodes-registration-557428302057)** -*Background:* +*Background:* Following a successful early access programme, Mahuika’s newest CPU nodes are now available for use by any projects that have a Mahuika allocation on NeSI's HPC Platform. The production launch of these new @@ -43,13 +43,13 @@ design of our platforms to meet your research needs. 
*What’s new* - faster, more powerful computing, enabled by AMD 3rd Gen EPYC Milan -architecture + architecture - specialised high-memory capabilities, allowing rapid simultaneous -processing + processing - improved energy efficiency - these nodes are 2.5 times more power -efficient than Mahuika’s original Broadwell nodes + efficient than Mahuika’s original Broadwell nodes Come along to [this webinar](https://www.eventbrite.co.nz/e/webinar-making-the-most-of-mahuikas-new-milan-nodes-registration-557428302057) @@ -67,4 +67,4 @@ If you're unable to join us for this session but have questions about the Milan nodes or would like more information, come along to one of our [weekly Online Office Hours](https://support.nesi.org.nz/hc/en-gb/articles/4830713922063) or -email anytime. \ No newline at end of file +email anytime.  \ No newline at end of file diff --git a/docs/General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md b/docs/General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md index 8e5eebd95..e6e20fae0 100644 --- a/docs/General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md +++ b/docs/General/FAQs/Can_I_use_SSHFS_to_mount_the_cluster_filesystem_on_my_local_machine.md @@ -63,8 +63,8 @@ and give the volume a sensible name: # create a mount point and connect mkdir -p ~/mahuika-home sshfs mahuika: ~/mahuika-home \ --oauto_cache,follow_symlinks \ --ovolname=MahuikaHome,defer_permissions,noappledouble,local + -oauto_cache,follow_symlinks \ + -ovolname=MahuikaHome,defer_permissions,noappledouble,local ``` To unmount the directory on MacOS, either eject from Finder or run: @@ -73,5 +73,5 @@ To unmount the directory on MacOS, either eject from Finder or run: umount ~/mahuika-home ``` !!! prerequisite Note -Newer MacOS does not come with SSHFS pre installed. You will have to -install FUSE as SSHFS from [here](https://osxfuse.github.io/). 
\ No newline at end of file + Newer MacOS does not come with SSHFS pre installed. You will have to + install FUSE as SSHFS from [here](https://osxfuse.github.io/). \ No newline at end of file diff --git a/docs/General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md b/docs/General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md index 645cfa80c..6c27d3d18 100644 --- a/docs/General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md +++ b/docs/General/FAQs/Converting_from_Windows_style_to_UNIX_style_line_endings.md @@ -32,9 +32,9 @@ Unfortunately, the programmers of different operating systems have represented line endings using different sequences: - All versions of Microsoft Windows represent line endings as CR -followed by LF. + followed by LF. - UNIX and UNIX-like operating systems (including Mac OS X) represent -line endings as LF alone. + line endings as LF alone. Therefore, a text file prepared in a Windows environment will, when copied to a UNIX-like environment such as a NeSI cluster, have an @@ -57,7 +57,7 @@ If you submit (using `sbatch`) a Slurm submission script with Windows-style line endings, you will likely receive the following error: ``` bash -sbatch: error: Batch script contains DOS line breaks (\r\n) +sbatch: error: Batch script contains DOS line breaks (\r\n) sbatch: error: instead of expected UNIX line breaks (\n). ``` @@ -69,7 +69,7 @@ variable, but program behaviours might include the following responses: - Explicitly stating the problem with line endings - Complaining more vaguely that the input data is incomplete or -corrupt or that there are problems reading it + corrupt or that there are problems reading it - Failing in a more serious way such as a segmentation fault ## Checking a file's line ending format @@ -109,10 +109,10 @@ a box containing the current line ending format. - In most cases, this box will contain the text "DOS\Windows". 
- In a few cases, such as the file having been prepared on a UNIX or -Linux machine or a Mac, it will contain the text "UNIX". + Linux machine or a Mac, it will contain the text "UNIX". - It is possible, though highly unlikely by now, that the file may -have old-style (pre-OSX) Mac line endings, in which case the box -will contain the text "Macintosh". + have old-style (pre-OSX) Mac line endings, in which case the box + will contain the text "Macintosh". Please note that if you change a file's line ending style, you must save your changes before copying the file anywhere, including to a cluster. diff --git a/docs/General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md b/docs/General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md index a676238fd..b17c08905 100644 --- a/docs/General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md +++ b/docs/General/FAQs/How_can_I_give_read_only_team_members_access_to_my_files.md @@ -20,8 +20,8 @@ zendesk_section_id: 360000039036 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite See also -[File permissions and -groups](https://support.nesi.org.nz/hc/en-gb/articles/360000205435) + [File permissions and + groups](https://support.nesi.org.nz/hc/en-gb/articles/360000205435) Not all projects have read-only groups created by default. If your project has a read-only group created after the project itself was @@ -34,57 +34,57 @@ following commands explain how to do this;  when running the commands, replace `nesi12345` and `nesi12345r` with your project code and read-only project code respectively. !!! prerequisite Warning -If this process is interrupted part-way through, for example due to -your computer going to sleep and losing its connection to your NeSI -terminal session, your files can end up in a bad way. 
For this reason -please **run all the following commands in a `screen` or `tmux` -session.** + If this process is interrupted part-way through, for example due to + your computer going to sleep and losing its connection to your NeSI + terminal session, your files can end up in a bad way. For this reason + please **run all the following commands in a `screen` or `tmux` + session.** 1. Prepare a file containing the ACL to add. Ensure you include the -`mask` line. Note that the script will not remove any of the -existing ACL, except for overwriting existing lines that are the -same, up to the second colon, as one of the new lines you ask to -add. + `mask` line. Note that the script will not remove any of the + existing ACL, except for overwriting existing lines that are the + same, up to the second colon, as one of the new lines you ask to + add. -``` sl -echo "mask::rwxc" > acl_to_add.txt -echo "group:nesi12345r:r-x-" >> acl_to_add.txt -``` + ``` sl + echo "mask::rwxc" > acl_to_add.txt + echo "group:nesi12345r:r-x-" >> acl_to_add.txt + ``` 2. Check that the contents of the file are correct. -``` sl -cat acl_to_add.txt -``` + ``` sl + cat acl_to_add.txt + ``` 3. Carry out the ACL change. You can specify a subdirectory instead if, -as may well be the case, you don't want to trawl through the -entirety of `/nesi/project/nesi12345` or `/nesi/nobackup/nesi12345`. + as may well be the case, you don't want to trawl through the + entirety of `/nesi/project/nesi12345` or `/nesi/nobackup/nesi12345`. -``` sl -nn_add_to_acls_recursively -f acl_to_add.txt /nesi/project/nesi12345 -``` + ``` sl + nn_add_to_acls_recursively -f acl_to_add.txt /nesi/project/nesi12345 + ``` 4. 
Check the resulting ACLs, for example: -``` sl -/usr/lpp/mmfs/bin/mmgetacl /nesi/project/nesi12345/some_dir -/usr/lpp/mmfs/bin/mmgetacl -d /nesi/project/nesi12345/some_dir -``` + ``` sl + /usr/lpp/mmfs/bin/mmgetacl /nesi/project/nesi12345/some_dir + /usr/lpp/mmfs/bin/mmgetacl -d /nesi/project/nesi12345/some_dir + ``` -We suggest to check at least one subdirectory, at least one -executable file (if there is one) and at least one non-executable -file. + We suggest to check at least one subdirectory, at least one + executable file (if there is one) and at least one non-executable + file. 5. Repeat steps 3 and 4 for other directories within -`/nesi/project/nesi12345` and `/nesi/nobackup/nesi12345`, with the -necessary modifications. + `/nesi/project/nesi12345` and `/nesi/nobackup/nesi12345`, with the + necessary modifications. 6. Optionally, remove your ACL file. -``` sl -rm acl_to_add.txt -``` + ``` sl + rm acl_to_add.txt + ``` 7. Optionally, exit the `screen` or `tmux` session when you are -finished. \ No newline at end of file + finished. \ No newline at end of file diff --git a/docs/General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md b/docs/General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md index 88b0c5634..5eca0e603 100644 --- a/docs/General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md +++ b/docs/General/FAQs/How_can_I_let_my_fellow_project_team_members_read_or_write_my_files.md @@ -20,8 +20,8 @@ zendesk_section_id: 360000039036 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite See also -[File permissions and -groups](https://support.nesi.org.nz/hc/en-gb/articles/360000205435) + [File permissions and + groups](https://support.nesi.org.nz/hc/en-gb/articles/360000205435) If you move or copy a file or directory from one project directory to another, or from somewhere within your home directory to somewhere @@ -50,10 +50,10 @@ advanced version of `scp`. 
`rsync` is typically used to copy files between two or more machines, but can also be used within the same machine. !!! prerequisite Warning -In both these commands, the `--no-perms` and `--no-group` options must -both come after `-a`. `-a` implicitly asserts `--perms` and `--group`, -and will therefore override whichever -of `--no-perms` and `--no-group` come before it. + In both these commands, the `--no-perms` and `--no-group` options must + both come after `-a`. `-a` implicitly asserts `--perms` and `--group`, + and will therefore override whichever + of `--no-perms` and `--no-group` come before it. ## To copy a file (or directory and its contents), updating its group and setting its permissions @@ -63,8 +63,8 @@ rsync -a --no-perms --no-group --chmod=ugo=rwX,Dg+s /path/to/source /path/to/des ## To move a file (or directory and its contents), updating its group and setting its permissions !!! prerequisite Warning -The `--remove-source-files` option is safe only if every source file -is otherwise left intact during the moving process. + The `--remove-source-files` option is safe only if every source file + is otherwise left intact during the moving process. ``` sl rsync --remove-source-files -a --no-perms --no-group --chmod=ugo=rwX,Dg+s /path/to/source /path/to/destination diff --git a/docs/General/FAQs/How_can_I_see_how_busy_the_cluster_is.md b/docs/General/FAQs/How_can_I_see_how_busy_the_cluster_is.md index eedf9256d..0bb35b305 100644 --- a/docs/General/FAQs/How_can_I_see_how_busy_the_cluster_is.md +++ b/docs/General/FAQs/How_can_I_see_how_busy_the_cluster_is.md @@ -20,7 +20,7 @@ zendesk_section_id: 360000039036 [//]: <> (REMOVE ME IF PAGE VALIDATED) You can get the current status of all nodes on a cluster using the -command `sinfo`, you will get a printout like the following. +command `sinfo`, you will get a printout like the following.  
*The nodelist column has been truncated for readability.* @@ -46,7 +46,7 @@ hugemem up 1-infini 7-00:00:00 128 4:16:2 1 mixed wbh001 Each partition has a row for every state it's nodes are currently in. -For example, the `large` partition currently has  **1** `down` node, +For example, the `large` partition currently has  **1** `down` node,  **133** `mixed` nodes,  **7** `allocated` nodes and  **85** `idle` nodes. @@ -68,3 +68,4 @@ If you are interested in the state of one partition in particular you may want to use the command `squeue -p ` to get the current queue of the partition ` ` +  \ No newline at end of file diff --git a/docs/General/FAQs/How_do_I_request_memory.md b/docs/General/FAQs/How_do_I_request_memory.md index 72fd3a4e9..4f4fd3ee2 100644 --- a/docs/General/FAQs/How_do_I_request_memory.md +++ b/docs/General/FAQs/How_do_I_request_memory.md @@ -23,7 +23,7 @@ In Slurm, there are two ways to request memory for your job: - `--mem`: Memory per node - `--mem-per-cpu`: Memory per [logical -CPU](https://support.nesi.org.nz/hc/en-gb/articles/360000568236) + CPU](https://support.nesi.org.nz/hc/en-gb/articles/360000568236) In most circumstances, you should request memory using `--mem`. The exception is if you are running an MPI job that could be placed on more @@ -31,40 +31,40 @@ than one node, with tasks divided up randomly, in which case `--mem-per-cpu` is more appropriate. More detail is in the following table, including how you can tell what sort of job you're submitting. 
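The preferred memory formats described here boil down to simple arithmetic. A quick illustrative sketch in shell (the figures are invented for the example and are not NeSI defaults or recommendations):

``` sl
# Illustrative numbers only: peak memory per MPI task (MB),
# tasks per node, and logical CPUs per task.
peak_per_task=1500
tasks_per_node=20
cpus_per_task=2

# Evenly split MPI job: request memory per node with --mem,
# i.e. (peak memory per task) x (tasks per node).
echo "--mem=$(( peak_per_task * tasks_per_node ))M"

# Randomly placed MPI job: request memory per logical CPU with
# --mem-per-cpu, i.e. (peak memory per task) / (CPUs per task).
echo "--mem-per-cpu=$(( peak_per_task / cpus_per_task ))M"
```

In practice you would round these figures up to leave some headroom above the measured peak.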
---------------- ------------------------- --------------------- ------------------------ ------------------------ ------------------ -------------------- -Job type Requested tasks Requested logical Requested nodes (`-N`, Requested tasks per Preferred memory Ideal value -(`-n`, `--ntasks`) CPUs per task `--nodes`) node format -(`--cpus-per-task`) (`--ntasks-per-node`) - -Serial 1 (or unspecified) 1 (or unspecified) (Irrelevant, but should (Irrelevant, but should `--mem=` Peak -not be not be memory3 -specified)1 specified)2 needed by the -program - -Multithreaded 1 (or unspecified) > 1 (Irrelevant, but should (Irrelevant, but should `--mem=` Peak -(e.g. OpenMP), not be not be memory3 -but not MPI specified)1 specified)2 needed by the -program - -MPI, evenly Unspecified4 ≥ 1 (or unspecified) ≥ 15 ≥ 15 `--mem=` (Peak -split between memory3 -nodes needed per MPI -(recommended task) × (number of -method) tasks per node) - -MPI, evenly > 1 ≥ 1 (or unspecified) Either 1 or the number (Irrelevant, but should `--mem=` (Peak -split between of tasks6 not be memory3 -nodes specified)4 needed per MPI -(discouraged task) × (number of -method) tasks per node) - -MPI, randomly > 1 ≥ 1 (or unspecified) > 1; < number of (Irrelevant, but should `--mem-per-cpu=` (Peak -placed tasks6 (or not be memory3 -unspecified) specified)4 needed per MPI -task) ÷ (number of -logical CPUs per MPI -task) ---------------- ------------------------- --------------------- ------------------------ ------------------------ ------------------ -------------------- + --------------- ------------------------- --------------------- ------------------------ ------------------------ ------------------ -------------------- + Job type Requested tasks Requested logical Requested nodes (`-N`, Requested tasks per Preferred memory Ideal value + (`-n`, `--ntasks`) CPUs per task `--nodes`) node format + (`--cpus-per-task`) (`--ntasks-per-node`) + + Serial 1 (or unspecified) 1 (or unspecified) (Irrelevant, but should 
(Irrelevant, but should `--mem=` Peak + not be not be memory3 + specified)1 specified)2 needed by the + program + + Multithreaded 1 (or unspecified) > 1 (Irrelevant, but should (Irrelevant, but should `--mem=` Peak + (e.g. OpenMP), not be not be memory3 + but not MPI specified)1 specified)2 needed by the + program + + MPI, evenly Unspecified4 ≥ 1 (or unspecified) ≥ 15 ≥ 15 `--mem=` (Peak + split between memory3 + nodes needed per MPI + (recommended task) × (number of + method) tasks per node) + + MPI, evenly > 1 ≥ 1 (or unspecified) Either 1 or the number (Irrelevant, but should `--mem=` (Peak + split between of tasks6 not be memory3 + nodes specified)4 needed per MPI + (discouraged task) × (number of + method) tasks per node)  + + MPI, randomly > 1 ≥ 1 (or unspecified) > 1; < number of (Irrelevant, but should `--mem-per-cpu=` (Peak + placed tasks6 (or not be memory3 + unspecified) specified)4 needed per MPI + task) ÷ (number of + logical CPUs per MPI + task) + --------------- ------------------------- --------------------- ------------------------ ------------------------ ------------------ -------------------- 1 If your job consists of only one task there's no reason to request a specific number of nodes, and requesting more than one node diff --git a/docs/General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md b/docs/General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md index d00d03a6d..1f18cc9e5 100644 --- a/docs/General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md +++ b/docs/General/FAQs/How_do_I_run_my_Python_Notebook_through_SLURM.md @@ -30,7 +30,7 @@ accessible through the command line if you are logged in through Jupyter. ``` sl -jupyter nbconvert --to script my_notebook.ipynb +jupyter nbconvert --to script my_notebook.ipynb ``` will create a new python script called `my_notebook.py`. 
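The converted script can then be submitted to Slurm like any other Python program. A minimal, hypothetical submission script, assuming `jupyter` and `python` are available in the job's environment (the resource values are placeholders, and any `module load` lines needed on NeSI are omitted):

``` sl
#!/bin/bash
#SBATCH --job-name=notebook
#SBATCH --time=00:30:00
#SBATCH --mem=2G

# Convert the notebook to a plain Python script (this can also be
# done once, interactively, before submitting), then run it.
jupyter nbconvert --to script my_notebook.ipynb
python my_notebook.py
```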
@@ -44,7 +44,7 @@ This option might be less convenient as the exporter saves the python file to your local computer, meaning you will have to drag it back into the file explorer in Jupyter from your downloads folder. - +  This script can then be run as a regular python script as described in our [Python](https://support.nesi.org.nz/hc/en-gb/articles/207782537) diff --git a/docs/General/FAQs/I_have_not_scanned_the_2FA_QR_code.md b/docs/General/FAQs/I_have_not_scanned_the_2FA_QR_code.md index 9246ae296..c1f37aea7 100644 --- a/docs/General/FAQs/I_have_not_scanned_the_2FA_QR_code.md +++ b/docs/General/FAQs/I_have_not_scanned_the_2FA_QR_code.md @@ -24,18 +24,19 @@ zendesk_section_id: 360000039036 [//]: <> (REMOVE ME IF PAGE VALIDATED) The QR code shown during the device registration cannot be regenerated -or displayed again. +or displayed again. If you do not capture the QR code, or lose the device storing the code -(also called a token), you will be unable to access your account. +(also called a token), you will be unable to access your account.  To have your existing token deleted so another can be generated for your account, log in to [my.nesi.org.nz](https://my.nesi.org.nz) and select the option 'Manage Two-Factor token' under 'Account'. - +  ## Related content [How to replace my 2FA token](https://support.nesi.org.nz/hc/en-gb/articles/360000684635) +  \ No newline at end of file diff --git a/docs/General/FAQs/Ive_run_out_of_storage_space.md b/docs/General/FAQs/Ive_run_out_of_storage_space.md index 168e6f878..a7f2aaba4 100644 --- a/docs/General/FAQs/Ive_run_out_of_storage_space.md +++ b/docs/General/FAQs/Ive_run_out_of_storage_space.md @@ -35,64 +35,64 @@ project_nesi99999 2T 798G 38.96% 100000 66951 66.95% nobackup_nesi99999 6.833T 10000000 2691383 26.91% ``` !!! 
prerequisite Note -There is a delay between making changes to a filesystem and seeing the -change in `nn_storage_quota`, immediate file count and disk space can -be found using the commands `du --inodes` and `du -h` respectively. + There is a delay between making changes to a filesystem and seeing the + change in `nn_storage_quota`, immediate file count and disk space can + be found using the commands `du --inodes` and `du -h` respectively. There are a few ways to deal with file count problems -- **Use **`/nesi/nobackup/` -The nobackup directory has a significantly higher inode count and no -disk space limits. Files here are not backed up, so best used for -intermediary or replaceable data. - -- **Delete unnecessary files** -Some applications will generate a large number of files during -runtime, using the command `du --inodes -d 1 | sort -hr` (for -inodes) or `du -h -d 1 | sort -hr` for disk space.  You can then -drill down into the directories with the largest file count deleting -files as viable. - -- **SquashFS archive (recommended)** -Many files can be compressed into a single SquashFS archive. We have -written a utility, `nn_archive_files`, to help with this process. -This utility can be run on Māui or Mahuika, but not, as yet, on -Māui-ancil; and it can submit the work as a Slurm job, which is -preferred. `nn_archive_files` can take, as trailing options, the -same options as `mksquashfs`, including choice of compression -algorithm; see `man mksquashfs` for more details. - -``` sl -nn_archive_files -p -n -t --verify -- /path/containing/files /path2/containing/files destination.squash -``` - -Then when files need to be accessed again they can be extracted -using, - -``` sl -/usr/sbin/unsquashfs destination.squash -``` - -You can do many other things with SquashFS archives, like quickly -list the files in the archive, extract some but not all of the -contents, and so on. See `man unsquashfs` for more details. 
- -- **Tarball (usable, but SquashFS is recommended)** -Many files can be compressed into a single 'tarball' - -``` sl -tar -czf name.tar /path/containing/files/ -``` - -Then when files need to be accessed again they can be un-tarred -using, - -``` sl -tar -xzf tarname.tar -``` - -- **Contact Support** -If you are following the recommendations here yet are still -concerned about inodes or disk space, open a [support -ticket](https://support.nesi.org.nz/hc/en-gb/requests/new) and we -can raise the limit for you. \ No newline at end of file +- **Use **`/nesi/nobackup/` + The nobackup directory has a significantly higher inode count and no + disk space limits. Files here are not backed up, so best used for + intermediary or replaceable data. + +- **Delete unnecessary files** + Some applications will generate a large number of files during + runtime, using the command `du --inodes -d 1 | sort -hr` (for + inodes) or `du -h -d 1 | sort -hr` for disk space.  You can then + drill down into the directories with the largest file count deleting + files as viable. + +- **SquashFS archive (recommended)** + Many files can be compressed into a single SquashFS archive. We have + written a utility, `nn_archive_files`, to help with this process. + This utility can be run on Māui or Mahuika, but not, as yet, on + Māui-ancil; and it can submit the work as a Slurm job, which is + preferred. `nn_archive_files` can take, as trailing options, the + same options as `mksquashfs`, including choice of compression + algorithm; see `man mksquashfs` for more details. + + ``` sl + nn_archive_files -p -n -t --verify -- /path/containing/files /path2/containing/files destination.squash + ``` + + Then when files need to be accessed again they can be extracted + using, + + ``` sl + /usr/sbin/unsquashfs destination.squash + ``` + + You can do many other things with SquashFS archives, like quickly + list the files in the archive, extract some but not all of the + contents, and so on. 
See `man unsquashfs` for more details. + +- **Tarball (usable, but SquashFS is recommended)** + Many files can be compressed into a single 'tarball'  + + ``` sl + tar -czf name.tar /path/containing/files/ + ``` + + Then when files need to be accessed again they can be un-tarred + using, + + ``` sl + tar -xzf tarname.tar + ``` + +- **Contact Support** + If you are following the recommendations here yet are still + concerned about inodes or disk space, open a [support + ticket](https://support.nesi.org.nz/hc/en-gb/requests/new) and we + can raise the limit for you. \ No newline at end of file diff --git a/docs/General/FAQs/Logging_in_to_the_HPCs.md b/docs/General/FAQs/Logging_in_to_the_HPCs.md index 2a95ae882..7e9ffa3c4 100644 --- a/docs/General/FAQs/Logging_in_to_the_HPCs.md +++ b/docs/General/FAQs/Logging_in_to_the_HPCs.md @@ -27,15 +27,15 @@ This page has been replaced. Information on how to log in can now be found at: - [Setting Up and Resetting Your -Password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) + Password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) - [Setting Up Two-Factor -Authentication](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) + Authentication](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) - [Choosing and Configuring Software for Connecting to the -Clusters](https://support.nesi.org.nz/hc/en-gb/articles/360001016335) + Clusters](https://support.nesi.org.nz/hc/en-gb/articles/360001016335) - [Standard Terminal -Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535) + Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535) - [Login -Troubleshooting](https://support.nesi.org.nz/hc/en-gb/articles/360000570215) + Troubleshooting](https://support.nesi.org.nz/hc/en-gb/articles/360000570215) Please update your links and bookmarks accordingly. 
If you have a specific question that is not answered on the pages above or elsewhere diff --git a/docs/General/FAQs/Login_Troubleshooting.md b/docs/General/FAQs/Login_Troubleshooting.md index 597a8bb43..24ede7ec8 100644 --- a/docs/General/FAQs/Login_Troubleshooting.md +++ b/docs/General/FAQs/Login_Troubleshooting.md @@ -20,20 +20,20 @@ zendesk_section_id: 360000039036 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -Please make sure you have followed the recommended setup. See -[Choosing and Configuring Software for Connecting to the -Clusters](https://support.nesi.org.nz/hc/en-gb/articles/360001016335) -for more information. + Please make sure you have followed the recommended setup. See + [Choosing and Configuring Software for Connecting to the + Clusters](https://support.nesi.org.nz/hc/en-gb/articles/360001016335) + for more information. !!! prerequisite New Command Line Users -- Most terminals do not give an indication of how many characters -have been typed when entering a password. -- Paste is not usually bound to `ctrl` + `V` and will vary based on -your method of access. + - Most terminals do not give an indication of how many characters + have been typed when entering a password. + - Paste is not usually bound to `ctrl` + `V` and will vary based on + your method of access. ## Repeatedly asking for First and Second Factor. In addition to using an incorrect First/Second factor there are several -other issues that will cause a similar looking failure to log in. +other issues that will cause a similar looking failure to log in.  
``` sl Login Password: @@ -44,11 +44,11 @@ Login Password: OR ``` sl -Login Password (First Factor): +Login Password (First Factor): Authenticator Code (Second Factor): -Login Password (First Factor): +Login Password (First Factor): Authenticator Code (Second Factor): -Login Password (First Factor): +Login Password (First Factor): Authenticator Code (Second Factor): ``` @@ -84,7 +84,7 @@ correctly](https://support.nesi.org.nz/hc/en-gb/articles/360000161315#recMoba). If you are not a member of an active project, or your project has no active allocation, you will not be able to log in. You should be able to find whether you have any active projects with active -allocations [here](https://my.nesi.org.nz/html/view_projects). +allocations [here](https://my.nesi.org.nz/html/view_projects).  #### 3. Confirm you are using the correct username and password @@ -119,7 +119,7 @@ people have multiple tokens and occasionally mix them up. Six failed login attempts within five minutes will trigger a four-hour lockout. Users experiencing login issues can inadvertently trigger the -lockout, making diagnosing the original issue much more difficult. +lockout, making diagnosing the original issue much more difficult.   ## Connection closed by .... (MobaXterm) @@ -137,11 +137,11 @@ input before pressing Enter will cause the login to fail. The expected processes is as follows: ``` sl -ssh @lander.nesi.org.nz +ssh @lander.nesi.org.nz @lander.nesi.org.nz's password: @lander.nesi.org.nz's password: @lander.nesi.org.nz's password: -Login Password (First Factor): +Login Password (First Factor): Authenticator Code (Second Factor): ``` @@ -149,7 +149,7 @@ Authenticator Code (Second Factor): #### 2. Update your MobaXTerm client. -Occasionally an outdated client can cause errors. +Occasionally an outdated client can cause errors. MobaXterm can be updated through: 'help>check for updates' #### 3. Reinstall your MobaXTerm client. 
@@ -172,27 +172,27 @@ to reset your token though [my.nesi.org.nz](https://my.nesi.org.nz/). Helpful things to include: - The client you are using (WSL, MobaXterm, Mac terminal, Linux, -etc.). + etc.). - The nature of the problem, including the precise text of any error -message you have been receiving. -- Did you start out having one login problem and are now getting a -different one? If so, when did the change happen, and were you -doing anything in particular related to logging in at the time -things changed? + message you have been receiving. + - Did you start out having one login problem and are now getting a + different one? If so, when did the change happen, and were you + doing anything in particular related to logging in at the time + things changed? - Have you successfully logged in in the past? if so when was the last -time you successfully logged in, and to what NeSI cluster? + time you successfully logged in, and to what NeSI cluster? - Has anything administrative and relevant to NeSI access changed -since you last logged in? For example: -- Have you opened or joined any new NeSI projects, or have any of -your existing NeSI projects closed? -- Have any of your NeSI projects been granted new allocations, had -a previously granted new allocation actually start, or had an -existing allocation modified? -- Have any of your NeSI projects' existing allocations ended? -- Have any of your NeSI projects had a disk space quota change? -- Have you changed your institutional username and password, moved -to a different institution, or started a new job at an -institution while also keeping your position at your old -institution? Might NeSI know about any of these changes? + since you last logged in? For example: + - Have you opened or joined any new NeSI projects, or have any of + your existing NeSI projects closed? 
+ - Have any of your NeSI projects been granted new allocations, had + a previously granted new allocation actually start, or had an + existing allocation modified? + - Have any of your NeSI projects' existing allocations ended? + - Have any of your NeSI projects had a disk space quota change? + - Have you changed your institutional username and password, moved + to a different institution, or started a new job at an + institution while also keeping your position at your old + institution? Might NeSI know about any of these changes? - What have you tried so far? - Are you on the NIWA network, the NIWA VPN, or neither? \ No newline at end of file diff --git a/docs/General/FAQs/Password_Expiry.md b/docs/General/FAQs/Password_Expiry.md index 5039367f0..3d9cbf5ee 100644 --- a/docs/General/FAQs/Password_Expiry.md +++ b/docs/General/FAQs/Password_Expiry.md @@ -24,9 +24,9 @@ that happens is ``` sl Password expired. Change your password now. -First Factor (Current Password): -Second Factor (optional): -Login Password: +First Factor (Current Password): +Second Factor (optional): +Login Password: ``` however passwords can not be reset this way, instead you should [reset diff --git a/docs/General/FAQs/Two_Factor_Authentication_FAQ.md b/docs/General/FAQs/Two_Factor_Authentication_FAQ.md index e55f4c08a..1afd43025 100644 --- a/docs/General/FAQs/Two_Factor_Authentication_FAQ.md +++ b/docs/General/FAQs/Two_Factor_Authentication_FAQ.md @@ -52,9 +52,9 @@ original device tap the three-dot menu icon followed by **Transfer accounts**, then **Export accounts**, select the accounts you want to keep and then press **Next**. If these options are not present then first update your Authenticator. On the new device press **Import -existing accounts** then scan the QR code provided on the old device. +existing accounts** then scan the QR code provided on the old device.  -## How do I get a new Second Factor? +## How do I get a new Second Factor? 
**Answer:** See article [here](https://support.nesi.org.nz/hc/en-gb/articles/360000684635-How-to-replace-my-2FA-token). diff --git a/docs/General/FAQs/What_are_my-bashrc_and-bash_profile_for.md b/docs/General/FAQs/What_are_my-bashrc_and-bash_profile_for.md index e366a08df..c73ed7b43 100644 --- a/docs/General/FAQs/What_are_my-bashrc_and-bash_profile_for.md +++ b/docs/General/FAQs/What_are_my-bashrc_and-bash_profile_for.md @@ -25,12 +25,12 @@ your *shell*, the program that interprets and executes the commands that you type in at your command prompt. But they're somewhat confusing, because there are several, and it's not obvious which are read and when. !!! prerequisite Warning -This documentation is specific to the *bash* shell, which is our -chosen default shell for all users, and is the default for most Linux -machines. If you have chosen a different default shell, or have -started another shell manually on the command line, these notes will -apply with modifications, or not at all; please consult the -documentation for your shell. + This documentation is specific to the *bash* shell, which is our + chosen default shell for all users, and is the default for most Linux + machines. If you have chosen a different default shell, or have + started another shell manually on the command line, these notes will + apply with modifications, or not at all; please consult the + documentation for your shell. ## `~/.bashrc` @@ -86,7 +86,7 @@ useful rules of thumb: - Functions and aliases go in `~/.bashrc` - Modifications to `PATH` and `LD_LIBRARY_PATH` go in -`~/.bash_profile` + `~/.bash_profile` These are guidelines only and are subject to your specific working practices and how you expect your shells to behave. 
diff --git a/docs/General/FAQs/What_does_oom_kill_mean.md b/docs/General/FAQs/What_does_oom_kill_mean.md index 7041683e0..51f23e3b6 100644 --- a/docs/General/FAQs/What_does_oom_kill_mean.md +++ b/docs/General/FAQs/What_does_oom_kill_mean.md @@ -27,17 +27,17 @@ slurmstepd: error: Detected 1 oom-kill event(s) in step 370626.batch cgroup ``` indicates that your job attempted to use more memory (RAM) than Slurm -reserved for it. +reserved for it.   OOM events can happen even without Slurm's `sacct` command reporting such a high memory usage, for two reasons: - Unlike the enforcement via cgroups, Slurm's accounting system only -records usage every 30 seconds, so sudden spikes in memory usage may -not be recorded, but can still trigger the OOM killer; + records usage every 30 seconds, so sudden spikes in memory usage may + not be recorded, but can still trigger the OOM killer; - Slurm's accounting system also does not include any temporary files -the job may have put in the memory-based `/tmp` or `$TMPDIR` -filesystems. + the job may have put in the memory-based `/tmp` or `$TMPDIR` + filesystems. If you see an OOM event, you have two options. 
The easier option is to request more memory by increasing the value of the `--mem` argument in diff --git a/docs/General/FAQs/What_software_environments_on_NeSI_are_optimised_for_Machine_Learning_and_data_science.md b/docs/General/FAQs/What_software_environments_on_NeSI_are_optimised_for_Machine_Learning_and_data_science.md index 5a0cbafe8..aac1129cf 100644 --- a/docs/General/FAQs/What_software_environments_on_NeSI_are_optimised_for_Machine_Learning_and_data_science.md +++ b/docs/General/FAQs/What_software_environments_on_NeSI_are_optimised_for_Machine_Learning_and_data_science.md @@ -24,28 +24,28 @@ When using NeSI's [HPC platform](https://support.nesi.org.nz/hc/en-gb/sections/360000034335), you can bring your own code to install or you can access our extensive software library which is already built and compiled, ready for you to -use. +use.  Examples of software environments on NeSI optimised for data science include: - [R](https://support.nesi.org.nz/hc/en-gb/articles/209338087-R) and [Python](https://support.nesi.org.nz/hc/en-gb/articles/360000990436) users -can get right into using and exploring the several built-in packages -or create custom code. + can get right into using and exploring the several built-in packages + or create custom code. - [Jupyter on NeSI -](https://support.nesi.org.nz/hc/en-gb/articles/360001555615-Jupyter-on-NeSI)is -particularly well suited to artificial intelligence and machine -learning workloads. [R -Studio](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) -and/or Conda can be accessed via Jupyter. + ](https://support.nesi.org.nz/hc/en-gb/articles/360001555615-Jupyter-on-NeSI)is + particularly well suited to artificial intelligence and machine + learning workloads. [R + Studio](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) + and/or Conda can be accessed via Jupyter. 
- Commonly used data science environments and libraries such as -[Keras](https://support.nesi.org.nz/hc/en-gb/articles/360001075936-Keras), -[LambdaStack](https://support.nesi.org.nz/hc/en-gb/articles/360002558216-Lambda-Stack), -[Tensorflow](https://support.nesi.org.nz/hc/en-gb/articles/360000990436) -and [Conda](https://docs.conda.io/en/latest/) are available to -create comprehensive workflows. + [Keras](https://support.nesi.org.nz/hc/en-gb/articles/360001075936-Keras), + [LambdaStack](https://support.nesi.org.nz/hc/en-gb/articles/360002558216-Lambda-Stack), + [Tensorflow](https://support.nesi.org.nz/hc/en-gb/articles/360000990436) + and [Conda](https://docs.conda.io/en/latest/) are available to + create comprehensive workflows. For more information about available software and applications, you can [browse our catalogue @@ -53,14 +53,14 @@ here](https://support.nesi.org.nz/hc/en-gb/sections/360000040076). As pictured in the screenshot below, you can type keywords into the catalogue's search field to browse by a specific software name or using -more broad terms such as "machine learning". +more broad terms such as "machine learning".  ![MachineLearningSoftwareEnvironments-May2021.png](../../assets/images/What_software_environments_on_NeSI_are_optimised_for_Machine_Learning_and_data_science.png) For more information on NeSI's model and approach to application support, refer to our [policy for the management of scientific application -software](https://support.nesi.org.nz/hc/en-gb/articles/360000170355). +software](https://support.nesi.org.nz/hc/en-gb/articles/360000170355).  
If you need help installing your software or would like to discuss your software needs with us, [contact NeSI diff --git a/docs/General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md b/docs/General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md index 4aa4d7458..02fec1309 100644 --- a/docs/General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md +++ b/docs/General/FAQs/Where_should_I_store_my_data_on_NeSI_systems.md @@ -19,7 +19,7 @@ zendesk_section_id: 360000039036 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) - +  In general, the **project directory** should be used for reference data, tools, and job submission and management scripts. The **nobackup @@ -30,5 +30,6 @@ used to build and edit code, provided that the code is under version control and changes are regularly checked into upstream revision control systems. The **long-term storage service** should be used for larger datasets that you only access occasionally and do not need to change in -situ. +situ.  +  \ No newline at end of file diff --git a/docs/General/FAQs/Why_cant_I_log_in_using_MobaXTerm.md b/docs/General/FAQs/Why_cant_I_log_in_using_MobaXTerm.md index a178bbada..70b0fa224 100644 --- a/docs/General/FAQs/Why_cant_I_log_in_using_MobaXTerm.md +++ b/docs/General/FAQs/Why_cant_I_log_in_using_MobaXTerm.md @@ -45,14 +45,14 @@ to fail. The expected procedure is as follows. ``` sl ssh @lander.nesi.org.nz -@lander.nesi.org.nz's password: -@lander.nesi.org.nz's password: +@lander.nesi.org.nz's password: +@lander.nesi.org.nz's password: @lander.nesi.org.nz's password: Login Password (First Factor): Authenticator Code (Second Factor): ``` - +  ## Delete Saved Credentials @@ -60,14 +60,14 @@ It's possible that, even with a fresh install of mobaXterm it is still trying to use your old password from credential manager. 1. Go to Settings->Configuration and go to the General tab and click -on MobaXterm password management + on MobaXterm password management 2. 
You should see the credentials for NeSI hosts (`lander`, `mahuika`, -`maui`) + `maui`) 3. Remove all entries. 4. Restart MobaXterm 5. Try logging in again - +  For more information about how to log in to our HPC facilities, please see [this diff --git a/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md b/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md index 6844d592f..242f258c5 100644 --- a/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md +++ b/docs/General/FAQs/Why_is_my_job_taking_a_long_time_to_start.md @@ -24,16 +24,16 @@ there are several possible causes. - [Scheduled maintenance](#scheduled-maintenance) - [Delays in the queue](#delays-in-the-queue) -- [Your job is being beaten by other high-priority -jobs](#other-high-priority-jobs) -- [Your project has a low Fair Share -score](#low-fair-share-score) -- [Your project has a high Fair Share score, but there are -lots of other jobs from similarly high-priority -projects](#queue-congestion) -- [Your job's resource demands are hard to -satisfy](#difficult-job) -- [Some other problem](#other-problem) + - [Your job is being beaten by other high-priority + jobs](#other-high-priority-jobs) + - [Your project has a low Fair Share + score](#low-fair-share-score) + - [Your project has a high Fair Share score, but there are + lots of other jobs from similarly high-priority + projects](#queue-congestion) + - [Your job's resource demands are hard to + satisfy](#difficult-job) + - [Some other problem](#other-problem) ## Scheduled maintenance @@ -56,7 +56,7 @@ This command will, for each of your queued jobs, produce an output looking something like this: ``` sl -$ nn_my_queued_jobs +$ nn_my_queued_jobs ACCOUNT JOBID NAME SUBMIT_TIME QOS NODE CPUS MIN_MEMORY PRIORITY START_TIME REASON nesi99999 12345678 SomeRandomJob 2019-01-01T12:00:00 collab 1 8 2G 1553 N/A QOSMaxCpuPerJobLimit ``` @@ -67,15 +67,15 @@ delayed. 
Common answers include "Priority", "Resources", "Dependency", "ReqNodeNotAvail", and others. - **Priority** means that the job just isn't in the front of the queue -yet. + yet. - **Resources** means that there are not currently enough free -resources to run the job. + resources to run the job. - **Dependency** means the job is in some way dependent on another, -and the other job (the dependency) has not yet reached the required -state. + and the other job (the dependency) has not yet reached the required + state. - **ReqNodeNotAvail** means that the job has requested some specific -node that is busy working on other jobs, is out of service, or does -not exist. + node that is busy working on other jobs, is out of service, or does + not exist. A more comprehensive list of job reason codes is available [here](https://slurm.schedmd.com/squeue.html#lbAF) (offsite). As noted @@ -105,17 +105,17 @@ to lowest. The output should look something like this: ``` sl -JOBID PARTITION PRIORITY AGE FAIRSHARE JOBSIZE QOS -793492 gpu 1553 504 1000 20 30 -2008465 long 1107 336 723 18 30 -2039471 long 1083 312 723 18 30 -2039456 long 1083 312 723 18 30 -2039452 long 1083 312 723 18 30 -2039435 long 1083 312 723 18 30 -2039399 long 1083 312 723 18 30 -2039391 long 1083 312 723 18 30 -2039376 long 1083 312 723 18 30 -2039371 long 1083 312 723 18 30 + JOBID PARTITION PRIORITY AGE FAIRSHARE JOBSIZE QOS + 793492 gpu 1553 504 1000 20 30 + 2008465 long 1107 336 723 18 30 + 2039471 long 1083 312 723 18 30 + 2039456 long 1083 312 723 18 30 + 2039452 long 1083 312 723 18 30 + 2039435 long 1083 312 723 18 30 + 2039399 long 1083 312 723 18 30 + 2039391 long 1083 312 723 18 30 + 2039376 long 1083 312 723 18 30 + 2039371 long 1083 312 723 18 30 ... 
``` @@ -168,7 +168,7 @@ it has succeeded, so you can check its effect using `scontrol show`: ``` sl $ scontrol show job 12345678 | grep TimeLimit -RunTime=00:00:00 TimeLimit=00:01:00 TimeMin=N/A + RunTime=00:00:00 TimeLimit=00:01:00 TimeMin=N/A ``` Note that you can not yourself use `scontrol` to increase a job's diff --git a/docs/General/NeSI_Policies/Access_Policy.md b/docs/General/NeSI_Policies/Access_Policy.md index bb89cc519..16263bb26 100644 --- a/docs/General/NeSI_Policies/Access_Policy.md +++ b/docs/General/NeSI_Policies/Access_Policy.md @@ -27,12 +27,12 @@ Our Access Policy provides essential information for researchers accessing the following NeSI services: - HPC Compute and Analytics – provides access to [HPC -platforms](https://support.nesi.org.nz/hc/en-gb/sections/360000034335-The-NeSI-High-Performance-Computers) -that host a broad range of high-performance [software applications -and -libraries](https://www.nesi.org.nz/services/high-performance-computing/software). + platforms](https://support.nesi.org.nz/hc/en-gb/sections/360000034335-The-NeSI-High-Performance-Computers) + that host a broad range of high-performance [software applications + and + libraries](https://www.nesi.org.nz/services/high-performance-computing/software). - Consultancy and Training – provides access to [expert scientific -software programmers](https://www.nesi.org.nz/about-us/who-we-are) -and [training -workshops](https://www.nesi.org.nz/services/computational-science-team/workshops) -respectively. \ No newline at end of file + software programmers](https://www.nesi.org.nz/about-us/who-we-are) + and [training + workshops](https://www.nesi.org.nz/services/computational-science-team/workshops) + respectively. 
\ No newline at end of file diff --git a/docs/General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members.md b/docs/General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members.md index 0ece3f6ca..df9aeb9e1 100644 --- a/docs/General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members.md +++ b/docs/General/NeSI_Policies/Account_Requests_for_non_Tuakiri_Members.md @@ -34,9 +34,9 @@ federation, you can request access via ![mceclip0.png](../../assets/images/Account_Requests_for_non_Tuakiri_Members.png) !!! prerequisite Warning -The email address you use on your application must be your -institutional email address. We do not accept applications using -personal email addresses. + The email address you use on your application must be your + institutional email address. We do not accept applications using + personal email addresses. We will review your request and, if we approve it, we will create a Tuakiri Virtual Home account for you, which you can use to login to @@ -45,16 +45,16 @@ an automatically generated email inviting you to activate your account. You will need to activate your account before you can log in to my.nesi.org.nz. !!! prerequisite What if I don't get the account activation email? -Some organisations' email servers are known to block Tuakiri's account -activation emails. If you haven't received your Tuakiri account -activation email by the end of the next business day after you applied -for an account, please check your junk mail and/or quarantine folders. -If you still can't find the email, [contact our support -team](https://support.nesi.org.nz/hc/requests/new). + Some organisations' email servers are known to block Tuakiri's account + activation emails. If you haven't received your Tuakiri account + activation email by the end of the next business day after you applied + for an account, please check your junk mail and/or quarantine folders. 
+ If you still can't find the email, [contact our support + team](https://support.nesi.org.nz/hc/requests/new). !!! prerequisite What next? -- [Project -Eligibility](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility) -- [Applying for a new -project.](https://support.nesi.org.nz/hc/en-gb/articles/360000174976-Applying-for-a-NeSI-project) -- [Applying to join an existing -project](https://support.nesi.org.nz/hc/en-gb/articles/360000693896). \ No newline at end of file + - [Project + Eligibility](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility) + - [Applying for a new + project.](https://support.nesi.org.nz/hc/en-gb/articles/360000174976-Applying-for-a-NeSI-project) + - [Applying to join an existing + project](https://support.nesi.org.nz/hc/en-gb/articles/360000693896). \ No newline at end of file diff --git a/docs/General/NeSI_Policies/How_we_review_applications.md b/docs/General/NeSI_Policies/How_we_review_applications.md index f4fb0e354..bde5e4d7c 100644 --- a/docs/General/NeSI_Policies/How_we_review_applications.md +++ b/docs/General/NeSI_Policies/How_we_review_applications.md @@ -25,85 +25,85 @@ technical support team for review. In general, our review process for new projects is as follows: 1. **Initial check:** We see whether your proposal describes a -legitimate research programme and whether your research programme -will need some kind of advanced research computing capability (which -may or may not be high-performance computing). We also check whether -your project team is all assembled and has the skills needed to -start using our systems (for example, basic familiarity with the -Linux command line). -If you are a NIWA researcher, we will also confirm with NIWA's -institutional point of contact that you have followed the [NIWA -internal documentation for gaining access to the -HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services). 
-You will only be able to access the NIWA internal documentation if -you are currently behind the NIWA VPN or on NIWA's internal network. + legitimate research programme and whether your research programme + will need some kind of advanced research computing capability (which + may or may not be high-performance computing). We also check whether + your project team is all assembled and has the skills needed to + start using our systems (for example, basic familiarity with the + Linux command line). + If you are a NIWA researcher, we will also confirm with NIWA's + institutional point of contact that you have followed the [NIWA + internal documentation for gaining access to the + HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services). + You will only be able to access the NIWA internal documentation if + you are currently behind the NIWA VPN or on NIWA's internal network. 2. **Software check:** One of our technical experts looks at the -software you say you want to use and determines whether it can run -on any of our systems and whether you are likely to be legally -allowed to run the software on NeSI. This check is intended to cover -both compatibility and licensing matters, as well as whether you are -able and willing to compile or install the software yourself if -necessary. + software you say you want to use and determines whether it can run + on any of our systems and whether you are likely to be legally + allowed to run the software on NeSI. This check is intended to cover + both compatibility and licensing matters, as well as whether you are + able and willing to compile or install the software yourself if + necessary. 3. **Support check:** Some research programmes may have very demanding -support needs. We will consider whether we are able to offer the -kind and amount of support your team is likely to need to progress -your research if we approve it. 
This check is especially important -if we think you are likely to want or need to change someone else's -code. We may consult with our scientific programmers at this point, -and find out whether your project is likely to be eligible for our -consultancy service. + support needs. We will consider whether we are able to offer the + kind and amount of support your team is likely to need to progress + your research if we approve it. This check is especially important + if we think you are likely to want or need to change someone else's + code. We may consult with our scientific programmers at this point, + and find out whether your project is likely to be eligible for our + consultancy service. 4. **Disk space check:** We decide how much disk space your project is -likely to need in the persistent storage (project directory) and -scratch storage (nobackup directory). We may unfortunately have to -reject (or negotiate for less storage) if your disk space needs -would interfere with our ability to provide good service to other -research teams. + likely to need in the persistent storage (project directory) and + scratch storage (nobackup directory). We may unfortunately have to + reject (or negotiate for less storage) if your disk space needs + would interfere with our ability to provide good service to other + research teams. 5. **Facility:** Based on the information in your application, we -decide whether your workflow is best suited for Mahuika, Māui or -both, and also whether your project would benefit from an allocation -of GPU hours or access to ancillary nodes or virtual labs. + decide whether your workflow is best suited for Mahuika, Māui or + both, and also whether your project would benefit from an allocation + of GPU hours or access to ancillary nodes or virtual labs. 6. 
**Decision and notification:** If we approve an initial allocation -for your project, we will typically award the project an [allocation -of Mahuika compute units, Māui node hours, or both, and also an -online storage -allocation](https://support.nesi.org.nz/hc/en-gb/articles/360001385735), -from one of [our allocation -classes](https://support.nesi.org.nz/hc/en-gb/articles/360000925176). -In any case, we will send you an email telling you about our -decision. + for your project, we will typically award the project an [allocation + of Mahuika compute units, Māui node hours, or both, and also an + online storage + allocation](https://support.nesi.org.nz/hc/en-gb/articles/360001385735), + from one of [our allocation + classes](https://support.nesi.org.nz/hc/en-gb/articles/360000925176). + In any case, we will send you an email telling you about our + decision. Our review process for requests for new allocations on existing projects is simpler: 1. **Eligibility check:** We look at the information you have given us -(and may ask you more questions) to find out which of our regular -allocation classes -([Merit](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes#merit), -[Postgraduate](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes#postgrad) -or -[Institutional](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes#institutional)) -this research programme is eligible to receive. Your research -programme may be eligible for more than one allocation class. 
+ (and may ask you more questions) to find out which of our regular + allocation classes + ([Merit](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes#merit), + [Postgraduate](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes#postgrad) + or + [Institutional](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes#institutional)) + this research programme is eligible to receive. Your research + programme may be eligible for more than one allocation class. 2. **Amount and duration:** We will calculate the approximate amount of -compute resources you are likely to need based on what kind of -allocation you most recently received and your usage history. We may -suggest an allocation size (i.e. a number of Mahuika compute units -or Māui node hours) and a duration of up to 12 months, and give you -a chance to provide feedback if you think our suggested allocation -would not meet your needs. + compute resources you are likely to need based on what kind of + allocation you most recently received and your usage history. We may + suggest an allocation size (i.e. a number of Mahuika compute units + or Māui node hours) and a duration of up to 12 months, and give you + a chance to provide feedback if you think our suggested allocation + would not meet your needs. 3. **Choice of Class and Contention:** We will choose from which class -to award your allocation, based on your research programme's -eligibility for the different classes and whether your proposed -allocation would exceed [any class-based allocation -limits](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes). -We may change this choice depending on which classes, if any, are -under contention. 
+ to award your allocation, based on your research programme's + eligibility for the different classes and whether your proposed + allocation would exceed [any class-based allocation + limits](https://support.nesi.org.nz/hc/en-gb/articles/360000925176-Project-Eligibility-Classes). + We may change this choice depending on which classes, if any, are + under contention. 4. **Approval:** If we decide that your project should be considered -for an Institutional allocation, the request may need to be approved -by a representative of the project's host institution, which is the -institution where the project owner works or studies. + for an Institutional allocation, the request may need to be approved + by a representative of the project's host institution, which is the + institution where the project owner works or studies. 5. **Decision and notification:** We will send you an email telling you -about our decision. + about our decision. From time to time we may have to decline requests for allocations of computing resources. If we can't grant your research programme an diff --git a/docs/General/NeSI_Policies/Merit_allocations.md b/docs/General/NeSI_Policies/Merit_allocations.md index 6f4cda888..79455fb43 100644 --- a/docs/General/NeSI_Policies/Merit_allocations.md +++ b/docs/General/NeSI_Policies/Merit_allocations.md @@ -34,34 +34,34 @@ To be eligible for consideration for a Merit allocation, the application must meet the following criteria: - The underpinning research programme (that requires access to NeSI -HPC services to achieve the objectives of the research) must support -the [Government’s Science -Goals](https://www.mbie.govt.nz/science-and-technology/science-and-innovation/funding-information-and-opportunities/national-statement-of-science-investment/). 
+ HPC services to achieve the objectives of the research) must support + the [Government’s Science + Goals](https://www.mbie.govt.nz/science-and-technology/science-and-innovation/funding-information-and-opportunities/national-statement-of-science-investment/). - To demonstrate research quality and alignment with national research -priorities, the research funding must have come from a -peer-reviewed, contestable process at an institutional, regional or -national level. -- The following funding sources are likely to qualify: -- Research funds managed by the Ministry of Business, -Innovation and Employment (MBIE) -- Health Research Council funding -- Royal Society of New Zealand funding, e.g. Marsden grants -- SSIF programme funding (previously known as CRI Core -funding) -- Research programmes forming part of a National Science -Challenge -- Research programmes forming part of a Centre of Research -Excellence (CoRE) -- Other similar funding sources -- The following funding sources are unlikely to qualify: -- Privately funded research -- Research funded by a foreign government + priorities, the research funding must have come from a + peer-reviewed, contestable process at an institutional, regional or + national level. + - The following funding sources are likely to qualify: + - Research funds managed by the Ministry of Business, + Innovation and Employment (MBIE) + - Health Research Council funding + - Royal Society of New Zealand funding, e.g. Marsden grants + - SSIF programme funding (previously known as CRI Core + funding) + - Research programmes forming part of a National Science + Challenge + - Research programmes forming part of a Centre of Research + Excellence (CoRE) + - Other similar funding sources + - The following funding sources are unlikely to qualify: + - Privately funded research + - Research funded by a foreign government - The research grant or contract must cover the entire period for -which an allocation of NeSI resources is sought. 
+ which an allocation of NeSI resources is sought. - The applicant must be a named investigator on the peer reviewed -research grant or contract. If you are a student, we may at our -discretion consider your application for a Merit award if your -supervisor is a named investigator. + research grant or contract. If you are a student, we may at our + discretion consider your application for a Merit award if your + supervisor is a named investigator. Read more about [how we review applications](https://support.nesi.org.nz/hc/en-gb/articles/360000202136). @@ -70,3 +70,4 @@ To learn more about NeSI Projects or to apply for a new project, please read our article [Applying for a NeSI Project](https://support.nesi.org.nz/hc/articles/360000174976). +  \ No newline at end of file diff --git a/docs/General/NeSI_Policies/NeSI_Application_Support_Model.md b/docs/General/NeSI_Policies/NeSI_Application_Support_Model.md index 30d16f9fb..1f019c439 100644 --- a/docs/General/NeSI_Policies/NeSI_Application_Support_Model.md +++ b/docs/General/NeSI_Policies/NeSI_Application_Support_Model.md @@ -26,13 +26,13 @@ The NeSI policy for management of scientific application software is based on the following principles: - NeSI will install and maintain software in a central location if it -will be useful to a number of users, or if the effort to install it -is small. + will be useful to a number of users, or if the effort to install it + is small. - Users may install software in their `/home` or `/nesi/project/` -directories, provided that they have a license for the software (if -needed) that permits the software to be used on NeSI systems. + directories, provided that they have a license for the software (if + needed) that permits the software to be used on NeSI systems. - NeSI will provide users with a reasonable amount of help when they -are installing their own software. + are installing their own software. In more detail: Application software will be supported using a three-tier model. 
@@ -43,17 +43,17 @@ Includes applications (meaning tools, libraries and science applications) which: 1. Have a wide user base among users of the NeSI Compute and Analytics -Service, either because they are used by many users within one or -many Projects or are easy to install. + Service, either because they are used by many users within one or + many Projects or are easy to install. 2. Are centrally installed, tested (including scaling), optimised, -documented and upgraded as new versions become available. + documented and upgraded as new versions become available. 3. The NeSI Applications and/or Computational Science Team staff often -(but not always) have in-depth knowledge of the application science -– in which case they can provide specialist support to researchers. -However, if no in-depth knowledge is available in the team(s), this -does not prevent an application from being in Tier 1. + (but not always) have in-depth knowledge of the application science + – in which case they can provide specialist support to researchers. + However, if no in-depth knowledge is available in the team(s), this + does not prevent an application from being in Tier 1. 4. Support documentation will be provided, including licensing -information and scaling data etc. + information and scaling data etc. ## Tier 2 @@ -61,14 +61,14 @@ Includes applications (meaning tools, libraries and science applications) which: 1. Have a small but important user base, meaning they are used by -several users and 1 or several projects. + several users and 1 or several projects. 2. Are centrally installed, standard regression tests are applied (if -provided/available) and will be upgraded upon user request and as -time permits. + provided/available) and will be upgraded upon user request and as + time permits. 3. NeSI Applications and/or Computational Science Team staff have no -“in depth” knowledge of the application. + “in depth” knowledge of the application. 4. 
The support documentation provides basic information on how to use -the software. + the software. ## Tier 3 @@ -77,7 +77,7 @@ applications) that are required by one user, or have very limited use, in which case: 1. NeSI will (if required) provide limited guidance to the user so that -they can install the software in their home directory. + they can install the software in their home directory. 2. The user will be responsible for managing this software 3. No support documentation will be provided by NeSI 4. The software will not be listed in the software catalogue @@ -85,6 +85,6 @@ they can install the software in their home directory. NeSI will publish the current list of software (including all versions), with links to the Support documentation. - +  \ No newline at end of file diff --git a/docs/General/NeSI_Policies/NeSI_Licence_Policy.md b/docs/General/NeSI_Policies/NeSI_Licence_Policy.md index 6e6d9cf41..ec448f28c 100644 --- a/docs/General/NeSI_Policies/NeSI_Licence_Policy.md +++ b/docs/General/NeSI_Policies/NeSI_Licence_Policy.md @@ -24,8 +24,8 @@ own. If you wish to use any of the proprietary software installed on the NeSI cluster, you, or more likely your institution or department, will need to have an appropriate licence. !!! prerequisite Warning -Slurm and many other applications use the American spelling of the -noun, "*license*". + Slurm and many other applications use the American spelling of the + noun, "*license*". ## Licence Servers @@ -98,24 +98,24 @@ see a licence agreement allowing that person to use the software. We may also check to see whether the licence agreement forbids the person from using the software on NeSI. !!! prerequisite Warning -Some licence agreements are quite restrictive in terms of where, or on -what sort of machine, a licensee may run the program. For example, the -licence may require one or more of the following: -- The software may only be run on one computer (node) at a time. 
-- Any computer on which the software is run must be owned by the -user's employing institution, operated by employees of that -institution, or both. -- There may be other restrictions, like a limit to the number of -simultaneous tasks or threads you are permitted to run. -We may not have seen your licence agreement, and even if we have, -we're not intellectual property lawyers. Just because we grant you -access to a piece of software it doesn't necessarily mean you're -authorised to use it in the way you intend. **It is your -responsibility to ensure that your use of the software on NeSI -complies with the terms of your licence or is otherwise permitted by -law.** - -## Slurm Tokens + Some licence agreements are quite restrictive in terms of where, or on + what sort of machine, a licensee may run the program. For example, the + licence may require one or more of the following: + - The software may only be run on one computer (node) at a time. + - Any computer on which the software is run must be owned by the + user's employing institution, operated by employees of that + institution, or both. + - There may be other restrictions, like a limit to the number of + simultaneous tasks or threads you are permitted to run. + We may not have seen your licence agreement, and even if we have, + we're not intellectual property lawyers. Just because we grant you + access to a piece of software it doesn't necessarily mean you're + authorised to use it in the way you intend. **It is your + responsibility to ensure that your use of the software on NeSI + complies with the terms of your licence or is otherwise permitted by + law.** + +## Slurm Tokens  We encourage the use of Slurm licence tokens in your batch scripts, for example: @@ -136,6 +136,6 @@ likely leading to a timeout). The names of the Slurm licence tokens are included in the application-specific documentation. !!! prerequisite Note -Slurm licence reservations work independently of the licence server. 
-Not including a Slurm token will not prevent your job from running, -not will including one modify how your job runs (only *when* it runs). \ No newline at end of file + Slurm licence reservations work independently of the licence server. + Not including a Slurm token will not prevent your job from running, + nor will including one modify how your job runs (only *when* it runs). \ No newline at end of file diff --git a/docs/General/NeSI_Policies/NeSI_Password_Policy.md b/docs/General/NeSI_Policies/NeSI_Password_Policy.md index 9ebbec821..f0dcf4ce0 100644 --- a/docs/General/NeSI_Policies/NeSI_Password_Policy.md +++ b/docs/General/NeSI_Policies/NeSI_Password_Policy.md @@ -23,13 +23,14 @@ The NeSI password policy is as follows: - Your password must be at least 12 characters long - Your password must contain one or more characters from at least two -of the following classes: -- uppercase letters -- lowercase letters -- numbers -- special characters (excluding '**&<>\\**') + of the following classes: + - uppercase letters + - lowercase letters + - numbers + - special characters (excluding '**&<>\\**') - Passwords expire after 2 years (730 days) - When resetting a password ensure that it is not similar to the -previous password(s) as that will cause the new password to be -rejected. + previous password(s) as that will cause the new password to be + rejected. +  \ No newline at end of file diff --git a/docs/General/NeSI_Policies/NeSI_Privacy_Policy.md b/docs/General/NeSI_Policies/NeSI_Privacy_Policy.md index 92d29a1b6..31c6a3890 100644 --- a/docs/General/NeSI_Policies/NeSI_Privacy_Policy.md +++ b/docs/General/NeSI_Policies/NeSI_Privacy_Policy.md @@ -20,8 +20,8 @@ zendesk_section_id: 360000224835 [//]: <> (REMOVE ME IF PAGE VALIDATED) See for the current -version of the NeSI Privacy Policy. - +version of the NeSI Privacy Policy. + The Policy outlines what personal information NeSI collects. How it is stored and used.
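The Slurm licence tokens encouraged in the licence policy above can be requested in a batch script. A hedged sketch; the token name `abaqus@uoa` is purely illustrative, as the real token names are listed in the application-specific documentation:

``` sl
#!/bin/bash -e
#SBATCH --job-name=licensed-job
#SBATCH --time=01:00:00
# Ask Slurm to hold the job until a licence token is free
# ("abaqus@uoa:1" is a placeholder token name, not a real one)
#SBATCH --licenses=abaqus@uoa:1
```

As that policy notes, omitting the token affects only *when* the job runs, not whether it runs.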
How users can request access, correct or delete -information, and under what circumstances NeSI will disclose it. \ No newline at end of file +information, and under what circumstances NeSI will disclose it.  \ No newline at end of file diff --git a/docs/General/NeSI_Policies/Postgraduate_allocations.md b/docs/General/NeSI_Policies/Postgraduate_allocations.md index de19e0687..4cf44496e 100644 --- a/docs/General/NeSI_Policies/Postgraduate_allocations.md +++ b/docs/General/NeSI_Policies/Postgraduate_allocations.md @@ -28,24 +28,24 @@ To be considered for a Postgraduate allocation, your application must satisfy the following criteria: - You must be enrolled at a New Zealand degree-granting institution -and working on a postgraduate research programme (e.g. PhD or -Masters by research) approved by that institution. Applicants in -undergraduate programmes (including Honours programmes) or graduate -programmes based on coursework are not eligible. + and working on a postgraduate research programme (e.g. PhD or + Masters by research) approved by that institution. Applicants in + undergraduate programmes (including Honours programmes) or graduate + programmes based on coursework are not eligible. Even if your application satisfies these criteria, we may not award your project an allocation from the Postgraduate class: - If your institution is a NeSI Collaborator or Subscriber, your -project's allocation will most likely be made from your -institution's entitlement. + project's allocation will most likely be made from your + institution's entitlement. - If you have not used an HPC previously, we may award your project a -Proposal Development allocation first. In this case, your project -may be considered for a Postgraduate allocation after your Proposal -Development allocation is complete. + Proposal Development allocation first. In this case, your project + may be considered for a Postgraduate allocation after your Proposal + Development allocation is complete. 
- Some allocation requests may be declined, or alternatively postponed -until a later time, if there is insufficient computing capacity -available to meet demand. + until a later time, if there is insufficient computing capacity + available to meet demand. Read more about [how we review applications](https://support.nesi.org.nz/hc/en-gb/articles/360000202136). @@ -54,3 +54,4 @@ To learn more about NeSI Projects, and to apply please review the content of the section entitled [Applying for a NeSI Project](https://support.nesi.org.nz/hc/articles/360000174976). +  \ No newline at end of file diff --git a/docs/General/NeSI_Policies/Proposal_Development_allocations.md b/docs/General/NeSI_Policies/Proposal_Development_allocations.md index cd4816853..057c91da7 100644 --- a/docs/General/NeSI_Policies/Proposal_Development_allocations.md +++ b/docs/General/NeSI_Policies/Proposal_Development_allocations.md @@ -27,19 +27,19 @@ A Proposal Development allocation is a short-term allocation of up to Development allocation you can find out: - whether your software can run on a [NeSI -HPC](https://support.nesi.org.nz/hc/articles/360000175735), + HPC](https://support.nesi.org.nz/hc/articles/360000175735), - how your software scales to multiple cores or across compute nodes, - approximately how many compute units or node hours your research -project is likely to need. + project is likely to need. If: - you are new to NeSI or have not run this particular programme of -research on a NeSI system before, and + research on a NeSI system before, and - you work at a New Zealand institution that is not a NeSI -collaborating institution, and + collaborating institution, and - we decide that your workflow is likely to be a good technical fit -for our facilities, + for our facilities, it is likely that we will initially award your research programme a Proposal Development allocation. 
@@ -57,3 +57,4 @@ To learn more about NeSI Projects, and to apply please review the content of the section entitled [Applying for a NeSI Project](https://support.nesi.org.nz/hc/articles/360000174976). +  \ No newline at end of file diff --git a/docs/General/NeSI_Policies/Total_HPC_Resources_Available.md b/docs/General/NeSI_Policies/Total_HPC_Resources_Available.md index 8abf4bb28..4fe255fa9 100644 --- a/docs/General/NeSI_Policies/Total_HPC_Resources_Available.md +++ b/docs/General/NeSI_Policies/Total_HPC_Resources_Available.md @@ -129,7 +129,7 @@ width="98">

5,610,240

- +  Table 2: GPGPU resources available for Allocation per annum. Note: these are the maximum resources available (assuming all GPGPUs are used for @@ -201,3 +201,4 @@ Cuda Core-hours per annum

+  \ No newline at end of file diff --git a/docs/General/Release_Notes/About_the_Release_Notes_section.md b/docs/General/Release_Notes/About_the_Release_Notes_section.md index 40d357757..c55d13234 100644 --- a/docs/General/Release_Notes/About_the_Release_Notes_section.md +++ b/docs/General/Release_Notes/About_the_Release_Notes_section.md @@ -21,12 +21,12 @@ zendesk_section_id: 360000437436 NeSI publishes release notes for applications, 3rd party applications and NeSI services. This section will function as a directory to find all -published release note articles with the label 'releasenote' . +published release note articles with the label 'releasenote'.  ## NeSI applications You can find published release notes for NeSI applications in the -context of the structure of our documentation. +context of the structure of our documentation.  Product context > release notes section > versioned release note Example: [Release Notes Long-Term @@ -45,3 +45,4 @@ a reference to the vender release notes or documentation. Jupyter on NeSI is a recent example of a service composed of multiple components and dependencies that NeSI maintains. +  \ No newline at end of file diff --git a/docs/Getting_Started/Accessing_the_HPCs/Choosing_and_Configuring_Software_for_Connecting_to_the_Clusters.md b/docs/Getting_Started/Accessing_the_HPCs/Choosing_and_Configuring_Software_for_Connecting_to_the_Clusters.md index 3b0fd5682..bffcc44a5 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Choosing_and_Configuring_Software_for_Connecting_to_the_Clusters.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Choosing_and_Configuring_Software_for_Connecting_to_the_Clusters.md @@ -24,12 +24,12 @@ zendesk_section_id: 360000034315 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have an [active account and -project](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects). 
-- Set up your [NeSI Account -Password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995). -- Set up [Two-Factor -Authentication](https://support.nesi.org.nz/hc/en-gb/articles/360000203075). + - Have an [active account and + project](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects). + - Set up your [NeSI Account + Password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995). + - Set up [Two-Factor + Authentication](https://support.nesi.org.nz/hc/en-gb/articles/360000203075). Before you can start submitting work you will need some way of connecting to the NeSI clusters. @@ -44,25 +44,25 @@ operating system and level of experience. - ## JupyterHub -JupyterHub is a service providing access to Jupyter Notebooks on -NeSI. A terminal similar to the other setups describe below can be -accessed through the Jupyter Launcher. + JupyterHub is a service providing access to Jupyter Notebooks on + NeSI. A terminal similar to the other setups described below can be + accessed through the Jupyter Launcher. !!! prerequisite What next? -- More info on [Jupyter -Terminal](https://support.nesi.org.nz/hc/en-gb/articles/360001555615#jupyter-term) -- Visit [jupyter.nesi.org.nz](https://jupyter.nesi.org.nz/hub/). + - More info on [Jupyter + Terminal](https://support.nesi.org.nz/hc/en-gb/articles/360001555615#jupyter-term) + - Visit [jupyter.nesi.org.nz](https://jupyter.nesi.org.nz/hub/). ## Linux or Mac OS - ## Terminal -On MacOS or Linux you will already have a terminal emulator -installed, usually called, "Terminal." To find it, simply search for -"terminal". -Congratulations! You are ready to move to the next step. + On MacOS or Linux you will already have a terminal emulator + installed, usually called "Terminal". To find it, simply search for + "terminal". + Congratulations! You are ready to move to the next step. !!! prerequisite What next? 
-- Setting up your [Default -Terminal](https://support.nesi.org.nz/hc/en-gb/articles/360000625535) + - Setting up your [Default + Terminal](https://support.nesi.org.nz/hc/en-gb/articles/360000625535) ## Windows @@ -72,92 +72,92 @@ different options, listed in order of preference. - ## Ubuntu Terminal (Windows 10) !!! prerequisite Note -The Ubuntu Terminal and Windows Subsystem for Linux require -administrative privileges to enable and install them. If your -institution has not given you such privileges, consider using -another option such as MobaXTerm Portable Edition (see below). - -This is the most functional replication of a Unix terminal available -on Windows, and allows users to follow the same set of instructions -given to Mac/Linux users. It may be necessary to enable Windows -Subsystem for Linux (WSL) first. + The Ubuntu Terminal and Windows Subsystem for Linux require + administrative privileges to enable and install them. If your + institution has not given you such privileges, consider using + another option such as MobaXTerm Portable Edition (see below). + + This is the most functional replication of a Unix terminal available + on Windows, and allows users to follow the same set of instructions + given to Mac/Linux users. It may be necessary to enable Windows + Subsystem for Linux (WSL) first. !!! prerequisite What next? 
-- Enabling -[WSL](https://support.nesi.org.nz/hc/en-gb/articles/360001075575) -- Setting up the [Ubuntu -Terminal](https://support.nesi.org.nz/hc/en-gb/articles/360001050575) -- Setting up -[X-Forwarding](https://support.nesi.org.nz/hc/en-gb/articles/4407442866703) + - Enabling + [WSL](https://support.nesi.org.nz/hc/en-gb/articles/360001075575) + - Setting up the [Ubuntu + Terminal](https://support.nesi.org.nz/hc/en-gb/articles/360001050575) + - Setting up + [X-Forwarding](https://support.nesi.org.nz/hc/en-gb/articles/4407442866703) - ## MobaXterm -In addition to being a terminal emulator, MobaXterm also includes -several useful features like multiplexing, X11 forwarding and a file -transfer GUI. - -MobaXterm can be downloaded from -[here](https://mobaxterm.mobatek.net/download-home-edition.html). -The portable edition will allow you to use MobaXterm without needing -administrator privileges, however it introduces several bugs so we -*highly* recommend using the installer edition if you have -administrator privileges on your workstation or if your -institution's IT team supports MobaXTerm. + In addition to being a terminal emulator, MobaXterm also includes + several useful features like multiplexing, X11 forwarding and a file + transfer GUI. + + MobaXterm can be downloaded from + [here](https://mobaxterm.mobatek.net/download-home-edition.html). + The portable edition will allow you to use MobaXterm without needing + administrator privileges; however, it introduces several bugs, so we + *highly* recommend using the installer edition if you have + administrator privileges on your workstation or if your + institution's IT team supports MobaXTerm. !!! prerequisite What next? 
-- Setting up -[MobaXterm](https://support.nesi.org.nz/hc/en-gb/articles/360000624696) + - Setting up + [MobaXterm](https://support.nesi.org.nz/hc/en-gb/articles/360000624696) - ## Using a Virtual Machine -In order to avoid the problems of using a Windows environment, it -may be advisable to install a Linux Virtual machine. This may be -advantageous in other ways as many elements of scientific computing -require a Linux environment, also it can provide a more user -friendly place to become familiar with command line use. + In order to avoid the problems of using a Windows environment, it + may be advisable to install a Linux Virtual machine. This may be + advantageous in other ways, as many elements of scientific computing + require a Linux environment; it can also provide a more + user-friendly place to become familiar with command line use. -There are multiple free options when it comes to VM software. We -recommend [Oracle -VirtualBox](https://www.virtualbox.org/wiki/Downloads). + There are multiple free options when it comes to VM software. We + recommend [Oracle + VirtualBox](https://www.virtualbox.org/wiki/Downloads). -Further instructions on how to set up a virtual machine can be found -[here](https://blog.storagecraft.com/the-dead-simple-guide-to-installing-a-linux-virtual-machine-on-windows/). + Further instructions on how to set up a virtual machine can be found + [here](https://blog.storagecraft.com/the-dead-simple-guide-to-installing-a-linux-virtual-machine-on-windows/). -Once you have a working VM you may continue following the -instructions as given for -[Linux/MacOS](#h_c1bbd761-1133-499b-a61a-57b9c4320a1a). + Once you have a working VM you may continue following the + instructions as given for + [Linux/MacOS](#h_c1bbd761-1133-499b-a61a-57b9c4320a1a). !!! prerequisite What next? 
-- Setting up a [Virtual -Machine](https://blog.storagecraft.com/the-dead-simple-guide-to-installing-a-linux-virtual-machine-on-windows/) + - Setting up a [Virtual + Machine](https://blog.storagecraft.com/the-dead-simple-guide-to-installing-a-linux-virtual-machine-on-windows/) - ## WinSCP -WinSCP has some advantages over MobaXterm (customisable, cleaner -interface, open source), and some disadvantages (no built in -X-server, additional authentication step). However, WinSCP setup is -more involved than with MobaXterm, therefore we do not recommend it -for new users. + WinSCP has some advantages over MobaXterm (customisable, cleaner + interface, open source), and some disadvantages (no built-in + X-server, additional authentication step). However, WinSCP setup is + more involved than with MobaXterm; therefore, we do not recommend it + for new users. !!! prerequisite What next? -- Setting up -[WinSCP](https://support.nesi.org.nz/hc/en-gb/articles/360000584256) + - Setting up + [WinSCP](https://support.nesi.org.nz/hc/en-gb/articles/360000584256) - ## Git Bash -If you are using Git for version control you may already have Git -Bash installed. If not it can be downloaded -from [here](https://git-scm.com/downloads). + If you are using Git for version control you may already have Git + Bash installed. If not it can be downloaded + from [here](https://git-scm.com/downloads). -Git Bash is perfectly adequate for testing your login or setting up -your password, but lacks many of the features of MobaXterm or a -native Unix-Like terminal. Therefore we do not recommend it as your -primary terminal. + Git Bash is perfectly adequate for testing your login or setting up + your password, but lacks many of the features of MobaXterm or a + native Unix-Like terminal. Therefore we do not recommend it as your + primary terminal. 
- ## Windows PowerShell -All Windows computers have PowerShell installed, however it will -only be useful to you if Windows Subsystem for Linux (WSL) is also -enabled, instructions -[here](https://support.nesi.org.nz/hc/en-gb/articles/360001075575). + All Windows computers have PowerShell installed; however, it will + only be useful to you if Windows Subsystem for Linux (WSL) is also + enabled; instructions are + [here](https://support.nesi.org.nz/hc/en-gb/articles/360001075575). -Like Git Bash, PowerShell is perfectly adequate for testing your -login or setting up your password, but lacks many of the features of -MobaXterm or a native Unix-Like terminal. Therefore we do not -recommend it as your primary terminal. \ No newline at end of file + Like Git Bash, PowerShell is perfectly adequate for testing your + login or setting up your password, but lacks many of the features of + MobaXterm or a native Unix-Like terminal. Therefore we do not + recommend it as your primary terminal. \ No newline at end of file diff --git a/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md b/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md index e9eee1928..712d4ba88 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Port_Forwarding.md @@ -20,9 +20,9 @@ zendesk_section_id: 360000034315 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have your [connection to the NeSI -cluster](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Standard-Terminal-Setup) -configured. + - Have your [connection to the NeSI + cluster](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Standard-Terminal-Setup) + configured. 
Some applications only accept connections from internal ports (i.e a port on the same local network), if you are running one such application @@ -36,7 +36,7 @@ Three values must be known, the *local port*, the *host alias*, and the **Localhost: **The self address of a host (computer), equivalent to `127.0.0.1`. The alias `localhost` can also be used in most cases. -**Local Port:** The port number you will use on your local machine. +**Local Port:** The port number you will use on your local machine.  **Host Alias:** An alias for the socket of your main connection to the cluster, `mahuika` or `maui` if you have set up your ssh config file as @@ -46,10 +46,10 @@ described **Remote Port:** The port number you will use on the remote machine (in this case the NeSI cluster) !!! prerequisite Note -The following examples use aliases as set up in [standard terminal -setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). -This allows the forwarding from your local machine to the NeSI -cluster, without having to re-tunnel through the lander node. + The following examples use aliases as set up in [standard terminal + setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). + This allows the forwarding from your local machine to the NeSI + cluster, without having to re-tunnel through the lander node. ## Command line (OpenSSH) @@ -69,7 +69,7 @@ I want to connect to a server running on mahuika that is listening on port 6666. In a new terminal on my local machine I enter the command: ``` sl -ssh -L 5555:localhost:6666 mahuika +ssh -L 5555:localhost:6666 mahuika  ``` Your terminal will now function like a normal connection to mahuika. @@ -77,10 +77,10 @@ However if you close this terminal session the port forwarding will end. If there is no existing session on mahuika, you will be prompted for your first and second factor, same as during the regular log in -procedure. +procedure.  !!! 
prerequisite Note -Your local port and remote port do not have to be different numbers. -It is generally easier to use the same number for both. + Your local port and remote port do not have to be different numbers. + It is generally easier to use the same number for both. ## SSH Config (OpenSSH) @@ -98,21 +98,21 @@ ExitOnForwardFailure yes ``` ExitOnForwardFailure is optional, but it is useful to kill the session -if the port fails. +if the port fails.  e.g. ``` sl -Host mahuika -User cwal219 -Hostname login.mahuika.nesi.org.nz -ProxyCommand ssh -W %h:%p lander -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 -LocalForward 6676 mahuika:6676 -ExitOnForwardFailure yes + Host mahuika + User cwal219 + Hostname login.mahuika.nesi.org.nz + ProxyCommand ssh -W %h:%p lander + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + LocalForward 6676 mahuika:6676 + ExitOnForwardFailure yes ``` In the above example, the local and remote ports are the same. This @@ -121,15 +121,15 @@ isn't a requirement, but it makes things easier to remember. Now so long as you have a connection to the cluster, your chosen port will be forwarded. !!! prerequisite Note -- If you get a error message -``` sl -bind: No such file or directory -unix_listener: cannot bind to path: -``` -try to create the following directory: -``` sl -mkdir -P ~/.ssh/sockets -``` + - If you get an error message + ``` sl + bind: No such file or directory + unix_listener: cannot bind to path: + ``` + try to create the following directory: + ``` sl + mkdir -p ~/.ssh/sockets + ``` + ## MobaXterm @@ -137,7 +137,7 @@ If you have Windows Subsystem for Linux installed, you can use the method described above. This is the recommended method. You can tell if MobaXterm is using WSL as it will appear in the banner -when starting a new terminal session. +when starting a new terminal session. 
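For the OpenSSH-based setups above, it can be useful to confirm that the tunnel is actually up before pointing a client at the local port. A hedged sketch, reusing port 6676 from the `LocalForward` example (the commands assume standard Linux tools and the `mahuika` host alias):

``` sl
# Check that something is listening on the local end of the tunnel
ss -tln | grep 6676

# If you use a ControlMaster setup, ask ssh whether the master
# connection carrying the forward is still alive
ssh -O check mahuika
```

If nothing is listening, the session carrying the forward has probably closed; reconnect and the `LocalForward` will be re-established.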
![mceclip0.png](../../assets/images/Port_Forwarding.png) @@ -154,14 +154,14 @@ The two tunnels should look like this. ![mobakey.png](../../assets/images/Port_Forwarding_1.png) -■ local port -■ remote port -■ must match +■ local port +■ remote port +■ must match ■ doesn't matter +  - -## sshuttle +## sshuttle  [sshuttle](https://sshuttle.readthedocs.io/en/stable/) is a transparent proxy implementing VPN like traffic forwarding. It is based on Linux or @@ -196,7 +196,7 @@ Ports can also be forwarded from the login node to a compute node. The best way to do this is by creating a reverse tunnel **from your slurm job** (that way the tunnel doesn't depend on a separate shell, and -the tunnel will not outlive the job). +the tunnel will not outlive the job).  The syntax for opening a reverse tunnel is similar the regular tunnel command, `-N` to not execute a command after connecting, `-f` to run the @@ -220,8 +220,8 @@ ssh -Nf -R 6676:localhost:6676 ${SLURM_SUBMIT_HOST} ``` !!! prerequisite What Next? -- Using -[JupyterLab ](https://support.nesi.org.nz/hc/en-gb/articles/360001093315)on -the cluster. -- [NiceDCV ](https://support.nesi.org.nz/hc/en-gb/articles/360000719156) -- [Paraview](https://support.nesi.org.nz/hc/en-gb/articles/360001002956-ParaView) \ No newline at end of file + - Using + [JupyterLab ](https://support.nesi.org.nz/hc/en-gb/articles/360001093315)on + the cluster. 
+ - [NiceDCV ](https://support.nesi.org.nz/hc/en-gb/articles/360000719156) + - [Paraview](https://support.nesi.org.nz/hc/en-gb/articles/360001002956-ParaView) \ No newline at end of file diff --git a/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_Two_Factor_Authentication.md b/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_Two_Factor_Authentication.md index c2eb78872..b47d05963 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_Two_Factor_Authentication.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_Two_Factor_Authentication.md @@ -24,14 +24,14 @@ zendesk_section_id: 360000034315 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -You must: -- Have a [NeSI -account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). -- Be a member of an [active -project](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects). -- Have [set up your NeSI account -password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995-Setting-Up-and-Resetting-Your-Password). -- Have a device with an authentication app. + You must: + - Have a [NeSI + account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). + - Be a member of an [active + project](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects). + - Have [set up your NeSI account + password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995-Setting-Up-and-Resetting-Your-Password). + - Have a device with an authentication app. ##  Authentication App @@ -43,34 +43,34 @@ apps which work through the browser like Authy). If you some reason you can't do this, please contact NeSI support. - +  ## Linking a device to your account 1. Log in to [My NeSI](https://my.nesi.org.nz) via your browser. 2. 
Click **My HPC Account** on left hand panel  and then **Setup -Two-Factor Authentication device** + Two-Factor Authentication device** -![authentication\_factor\_setup.png](../../assets/images/Setting_Up_Two_Factor_Authentication.png) + ![authentication\_factor\_setup.png](../../assets/images/Setting_Up_Two_Factor_Authentication.png) -3. Click the "**Setup Two-Factor Authentication device**" link. -![](../../assets/images/Setting_Up_Two_Factor_Authentication_0.png) +3. Click the "**Setup Two-Factor Authentication device**" link. + ![](../../assets/images/Setting_Up_Two_Factor_Authentication_0.png) 4. After clicking on "Continue" you will retrieve the QR code. 5. Open your Authy or Google Authenticator app and click on the add -button and select "**Scan a barcode**". Alternatively, if you are -not able to scan the barcode from your device you can manually enter -the provided authentication code into your authentication app. + button and select "**Scan a barcode**". Alternatively, if you are + not able to scan the barcode from your device you can manually enter + the provided authentication code into your authentication app. ## The second-factor token The 6 digit code displayed on your app can now be used as the second -factor in the authentication process. +factor in the authentication process. This code rotates every 30 seconds, and it **can only be used once**. This means that you can only try logging in to the lander node once every 30 seconds. !!! prerequisite What next? 
-- [Getting access to the -cluster](https://support.nesi.org.nz/hc/en-gb/articles/360001016335) \ No newline at end of file + - [Getting access to the + cluster](https://support.nesi.org.nz/hc/en-gb/articles/360001016335) \ No newline at end of file diff --git a/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_and_Resetting_Your_Password.md b/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_and_Resetting_Your_Password.md index 56b2efa61..ffab8df69 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_and_Resetting_Your_Password.md +++ b/docs/Getting_Started/Accessing_the_HPCs/Setting_Up_and_Resetting_Your_Password.md @@ -23,75 +23,76 @@ zendesk_section_id: 360000034315 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have a [NeSI -account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). -- Be a member of an [active -project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) + - Have a [NeSI + account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). + - Be a member of an [active + project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) - - [Setting NeSI Password via my NeSI -portal](#h_d7de94ee-b517-41dd-b70e-6fca380b38a6) -- [Resetting NeSI Password via my NeSI -portal](#h_01G15PT2EM836JXJK202V52QZP) + portal](#h_d7de94ee-b517-41dd-b70e-6fca380b38a6) + - [Resetting NeSI Password via my NeSI + portal](#h_01G15PT2EM836JXJK202V52QZP) ## **Setting NeSI Password** - +  1. Log into the [my NeSI portal](https://my.nesi.org.nz) via your -browser. - + browser. + 2. Click **My HPC Account** on left hand panel and then **Set -Password** (If you are resetting your password this will read -**Reset Password**). -Note your** Username. -![authentication\_factor\_setup.png](../../assets/images/Setting_Up_and_Resetting_Your_Password.png) -** + Password** (If you are resetting your password this will read + **Reset Password**). + Note your** Username. 
+ ![authentication\_factor\_setup.png](../../assets/images/Setting_Up_and_Resetting_Your_Password.png) + ** 3. Enter and verify your new password, making sure it follows the -[password -policy](https://support.nesi.org.nz/hc/en-gb/articles/360000336015). - + [password + policy](https://support.nesi.org.nz/hc/en-gb/articles/360000336015). + -### ![SetNeSIaccountPassword.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_0.png) + #### ![SetNeSIaccountPassword.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_0.png) 4. If the password set was successful, following confirmation label -will appear on the same page within few seconds - -![change\_success.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_1.png) + will appear on the same page within a few seconds +   + ![change\_success.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_1.png) 5. Followed by an email confirmation similar to below ![password\_set\_confirmation.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_2.png) +  - - +  ## **Resetting NeSI Password via my NeSI portal** 1. Log into the [my NeSI portal](https://my.nesi.org.nz) via your -browser. - + browser. + 2. Click **My HPC Account** on left hand panel and then **Reset -Password** -Note your**** Username. - -**** ** -** + Password** + Note your **Username**. + + + **** ** + ** 3.  You can either enter the Old Password first and then set a new one -OR feel free to select **Forgot my password ** -- - We recommend **Forgot my password **option in general - + OR select **Forgot my password**. + - We recommend the **Forgot my password** option in general. + 4. If the password **reset** was successful, following confirmation -label will appear on the same page within few seconds -1. - ![change\_success.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_3.png) -5.
Followed by an email confirmation similar to below - - + label will appear on the same page within a few seconds + ![change\_success.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_3.png) +5. Followed by an email confirmation similar to below +   + ![password\_set\_confirmation.png](../../assets/images/Setting_Up_and_Resetting_Your_Password_4.png) !!! prerequisite What next? -- Set up [Second Factor -Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075-Setting-Up-Two-Factor-Authentication) + - Set up [Second Factor + Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075-Setting-Up-Two-Factor-Authentication) +  \ No newline at end of file diff --git a/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md b/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md index b027c4bcf..d4a9d60b4 100644 --- a/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md +++ b/docs/Getting_Started/Accessing_the_HPCs/X_Forwarding_using_the_Ubuntu_Terminal_on_Windows.md @@ -23,14 +23,14 @@ zendesk_section_id: 360000034315 [//]: <> (REMOVE ME IF PAGE VALIDATED) 1. [Download and install Xming from -here](https://sourceforge.net/projects/xming/). Don't install an SSH -client when prompted during the installation, if you are prompted -for Firewall permissions after installing Xming close the window -without allowing any Firewall permissions. + here](https://sourceforge.net/projects/xming/). Don't install an SSH + client when prompted during the installation. If you are prompted + for Firewall permissions after installing Xming, close the window + without allowing any Firewall permissions. 2. Open your Ubuntu terminal and install x11-apps with the command -`sudo apt install x11-apps -y`. + `sudo apt install x11-apps -y`. 3.
Restart your terminal, start your Xming (there should be a desktop -icon after installing it). You should now be able to X-Forward -displays from the HPC when you log in (assuming you have completed -the [terminal setup instructions found -here](https://support.nesi.org.nz/hc/en-gb/articles/360000625535)). \ No newline at end of file + icon after installing it). You should now be able to X-Forward + displays from the HPC when you log in (assuming you have completed + the [terminal setup instructions found + here](https://support.nesi.org.nz/hc/en-gb/articles/360000625535)). \ No newline at end of file diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md index 784c71e70..041564736 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_for_a_new_NeSI_project.md @@ -23,50 +23,50 @@ zendesk_section_id: 360000196195 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -Compulsory: -- Have a [NeSI Account -profile](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). -- NIWA researchers only: read and follow the [NIWA internal -documentation for gaining access to the -HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services) (this -link is only valid from within the NIWA network or VPN). -Preferred: -- Assemble your project team. -- Becoming familiar with the Linux command line. There are many -courses and online materials available, such as [Software -Carpentry](https://swcarpentry.github.io/shell-novice/), to help -you and your project team gain the necessary skills. 
-- Become familiar with foundational HPC skills, for example by -attending a NeSI introductory workshop, one of our [weekly -introductory sessions (or watching the -recording)](https://support.nesi.org.nz/hc/en-gb/articles/360000428676), -or having one or more of your project team members do so. -- Review our [allocation -classes](https://support.nesi.org.nz/hc/en-gb/articles/360000925176). -If you don't think you currently qualify for any class other than -Proposal Development, please [contact -us](https://support.nesi.org.nz/hc/requests/new) as soon as -possible to discuss your options. Your institution may be in a -position to buy a subscription from us while your Proposal -Development allocation is in effect if they do not already possess -one. + Compulsory: + - Have a [NeSI Account + profile](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). + - NIWA researchers only: read and follow the [NIWA internal + documentation for gaining access to the + HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services) (this + link is only valid from within the NIWA network or VPN). + Preferred: + - Assemble your project team. + - Becoming familiar with the Linux command line. There are many + courses and online materials available, such as [Software + Carpentry](https://swcarpentry.github.io/shell-novice/), to help + you and your project team gain the necessary skills. + - Become familiar with foundational HPC skills, for example by + attending a NeSI introductory workshop, one of our [weekly + introductory sessions (or watching the + recording)](https://support.nesi.org.nz/hc/en-gb/articles/360000428676), + or having one or more of your project team members do so. + - Review our [allocation + classes](https://support.nesi.org.nz/hc/en-gb/articles/360000925176). 
+ If you don't think you currently qualify for any class other than + Proposal Development, please [contact + us](https://support.nesi.org.nz/hc/requests/new) as soon as + possible to discuss your options. Your institution may be in a + position to buy a subscription from us while your Proposal + Development allocation is in effect if they do not already possess + one. Requests to use NeSI resources are [submitted via a web form](https://my.nesi.org.nz/). The NeSI team will endeavour to approve your project, or contact you for more information, within 3-5 working days of your submitting your project request. !!! prerequisite Note -If you are a member of NIWA please also ensure that you have also read -and followed the [NIWA internal documentation for gaining access to -the -HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services) -before applying for your NeSI project. *You will only be able to -access the link from behind the NIWA VPN.* -Other institutions may also put in place, or vary, pre-approval -processes from time to time. If you apply for a new project without -having completed any necessary pre-approval steps, your application -may be delayed more than usual, or we may notify you and ask you to -obtain pre-approval from your institution. + If you are a member of NIWA please also ensure that you have also read + and followed the [NIWA internal documentation for gaining access to + the + HPCs](https://one.niwa.co.nz/display/ONE/High+Performance+Computing+Facility+Services) + before applying for your NeSI project. *You will only be able to + access the link from behind the NIWA VPN.* + Other institutions may also put in place, or vary, pre-approval + processes from time to time. If you apply for a new project without + having completed any necessary pre-approval steps, your application + may be delayed more than usual, or we may notify you and ask you to + obtain pre-approval from your institution. 
## Information you will need to provide @@ -74,31 +74,31 @@ During the application process, we will ask you for the following information: - Your name, institutional affiliation (i.e. your employer or place of -study), role at your institution, a contact telephone number, and -work email address + study), role at your institution, a contact telephone number, and + work email address - The title of your proposed NeSI HPC Project, and a brief abstract -describing your project's goals + describing your project's goals - The scientific domain and field of study (i.e. subdomain) your -project belongs to + project belongs to - The date on which you plan to start your computational work on NeSI -(not the start date of the research programme as a whole, or of the -research programme's current or expected funding) + (not the start date of the research programme as a whole, or of the + research programme's current or expected funding) - Details of how your project is funded (this will help determine -whether you are eligible for an allocation from our -[Merit](https://support.nesi.org.nz/hc/articles/360000175635) class) + whether you are eligible for an allocation from our + [Merit](https://support.nesi.org.nz/hc/articles/360000175635) class) - Your previous HPC experience - Whether you would like expert scientific programming support on your -project + project - Who else will be working on the proposed NeSI HPC Project with you - What software you intend to use on the [NeSI -HPCs](https://support.nesi.org.nz/hc/articles/360000175735). + HPCs](https://support.nesi.org.nz/hc/articles/360000175735). You will also be given an opportunity to tell us anything else you think is relevant. !!! prerequisite What Next? -- Your NeSI Project proposal will be -[reviewed](https://support.nesi.org.nz/hc/en-gb/articles/360000202136), -after which you will be informed of the outcome. -- We may contact you if further details are required. 
-- When your project is approved you will be able to [set your Linux -Password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995). \ No newline at end of file + - Your NeSI Project proposal will be + [reviewed](https://support.nesi.org.nz/hc/en-gb/articles/360000202136), + after which you will be informed of the outcome. + - We may contact you if further details are required. + - When your project is approved you will be able to [set your Linux + Password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995). \ No newline at end of file diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md index e035a9d6d..a9d81c48b 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Applying_to_join_an_existing_NeSI_project.md @@ -24,21 +24,21 @@ zendesk_section_id: 360000196195 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- You must have a [NeSI -account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). + - You must have a [NeSI + account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715). ## How to join an existing project on NeSI 1. Make sure you have been given the project code by the project owner. 2. Log in to [my.nesi.org.nz](https://my.nesi.org.nz/). 3. Under the [Projects](https://my.nesi.org.nz/projects/join) page use -the "**Join Project**" link to request to be added to the project as -a member. + the "**Join Project**" link to request to be added to the project as + a member. Once submitted you will receive a ticket confirmation via email. !!! prerequisite What Next? -- The project owner will be notified, and asked to approve your -request. 
-- Once your request has been approved by the project owner and -processed by us, you will be able to [set your NeSI account -password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995). \ No newline at end of file + - The project owner will be notified, and asked to approve your + request. + - Once your request has been approved by the project owner and + processed by us, you will be able to [set your NeSI account + password](https://support.nesi.org.nz/hc/en-gb/articles/360000335995). \ No newline at end of file diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_a_NeSI_Account_Profile.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_a_NeSI_Account_Profile.md index 8f9393220..9f4c3741c 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_a_NeSI_Account_Profile.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Creating_a_NeSI_Account_Profile.md @@ -24,23 +24,23 @@ zendesk_section_id: 360000196195 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -Either an active login at a Tuakiri member institution, or [a Tuakiri -Virtual Home account in respect of your current place of work or -study](https://support.nesi.org.nz/hc/en-gb/articles/360000216035). + Either an active login at a Tuakiri member institution, or [a Tuakiri + Virtual Home account in respect of your current place of work or + study](https://support.nesi.org.nz/hc/en-gb/articles/360000216035). 1. Access [my.nesi.org.nz](https://my.nesi.org.nz) via your browser and -log in with either your institutional credentials, or your Tuakiri -Virtual Home account, whichever applies. + log in with either your institutional credentials, or your Tuakiri + Virtual Home account, whichever applies. 2. 
If this is your first time logging in to my.nesi and you do not have -an entry in our database (you have not previously had a NeSI -account) you will be asked to fill out some fields, such as your -role at your institution and contact telephone number, and submit -the online form to us. We will complete your personal profile for -our records. + an entry in our database (you have not previously had a NeSI + account) you will be asked to fill out some fields, such as your + role at your institution and contact telephone number, and submit + the online form to us. We will complete your personal profile for + our records. !!! prerequisite What next? -- [Apply for -Access](https://support.nesi.org.nz/hc/en-gb/articles/360000174976), -either submit an application for a new project or [join an -existing -project](https://support.nesi.org.nz/hc/en-gb/articles/360000693896). \ No newline at end of file + - [Apply for + Access](https://support.nesi.org.nz/hc/en-gb/articles/360000174976), + either submit an application for a new project or [join an + existing + project](https://support.nesi.org.nz/hc/en-gb/articles/360000693896). \ No newline at end of file diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Project_Extensions_and_New_Allocations_on_Existing_Projects.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Project_Extensions_and_New_Allocations_on_Existing_Projects.md index dadb9ea58..6ccb90b13 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Project_Extensions_and_New_Allocations_on_Existing_Projects.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Project_Extensions_and_New_Allocations_on_Existing_Projects.md @@ -27,7 +27,7 @@ for a new project to carry on the same work. 
We currently offer two sorts of extensions: - A new allocation of computing resources (usually compute units on -Mahuika or node hours on Māui) + Mahuika or node hours on Māui) - A project extension without a new allocation of computing resources. ## Will my project qualify for an extension? @@ -36,26 +36,26 @@ Usually, yes. There are a few circumstances in which a project will not qualify for an extension: - If there has been a substantial change to the research programme's -goals. + goals. - If there has been (or is about to be) a substantial change to the -computational methods the project team plans on using to carry out -the project work, such that we decide a new technical assessment is -warranted. + computational methods the project team plans on using to carry out + the project work, such that we decide a new technical assessment is + warranted. - If the project team is no longer eligible to receive computing -resources from NeSI. For example, the grant funding the research -programme has come to an end and the host institution does not wish -to purchase computing resources from NeSI (or allocate computing -resources from its subscription or collaborator share if it has -one). + resources from NeSI. For example, the grant funding the research + programme has come to an end and the host institution does not wish + to purchase computing resources from NeSI (or allocate computing + resources from its subscription or collaborator share if it has + one). - If the project's host institution has changed (or is about to -change). + change). - If the project's owner is no longer employed by or studying at the -project's host institution, and there is no-one at the host -institution who has agreed to take over project ownership -responsibilities. + project's host institution, and there is no-one at the host + institution who has agreed to take over project ownership + responsibilities. 
- If the project's owner (or supervisor if the project has one) is not -authorised to access NeSI facilities due to a refusal to accept or -failure to abide by the NeSI Acceptable Use Policy. + authorised to access NeSI facilities due to a refusal to accept or + failure to abide by the NeSI Acceptable Use Policy. ## Who may request a project extension? @@ -79,12 +79,12 @@ a new allocation (or, alternatively, clean up your project data) in the following circumstances: - In the lead-up to the end of the [call -window](https://www.nesi.org.nz/news/2018/04/new-application-process-merit-postgraduate-allocations) -immediately before your currently active allocation is scheduled to -end. + window](https://www.nesi.org.nz/news/2018/04/new-application-process-merit-postgraduate-allocations) + immediately before your currently active allocation is scheduled to + end. - In the lead-up to the end of your allocation. - If your allocation ends before your project is scheduled to end, in -the lead-up to the end of your project. + the lead-up to the end of your project. ## Requests for new allocations @@ -98,10 +98,10 @@ differ from our forecast. Please be aware that: - First and subsequent allocations are subject to the NeSI allocation -size and duration limits in force at the time they are considered by -our reviewers. + size and duration limits in force at the time they are considered by + our reviewers. - An allocation from an institution's entitlement is subject to -approval by that institution. + approval by that institution. ## Requests for project extensions without a new compute allocation @@ -121,3 +121,4 @@ Service](https://support.nesi.org.nz/hc/en-gb/articles/360001169956) or to move your research data off our facility and make arrangements with your project's host institution for long-term data storage. 
+  \ No newline at end of file diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md index 0404bdc69..6610db7e8 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/Quarterly_allocation_periods.md @@ -53,16 +53,16 @@ expires. For example, if your allocation expires at the end of May, you will receive email reminders during the month of April. We aggregate requests and deal with them in batches during the review -month. +month.  - If you apply for your new allocation early (for example, you apply -in February when your allocation isn’t due to end until the end of -May), we will hold your request until April. + in February when your allocation isn’t due to end until the end of + May), we will hold your request until April. - If you apply for your new allocation late, your request may be -deprioritised by your institution, or you may suffer an interruption -to service as we have to consider your request separately and later. -It is possible, depending on overall demand, that you may have to -wait for the following call before your request is considered. + deprioritised by your institution, or you may suffer an interruption + to service as we have to consider your request separately and later. + It is possible, depending on overall demand, that you may have to + wait for the following call before your request is considered. If you have questions about the review cycles or other steps involved with getting access to NeSI, contact . 
\ No newline at end of file diff --git a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md index 3aacaccf3..7043c3b34 100644 --- a/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md +++ b/docs/Getting_Started/Accounts-Projects_and_Allocations/What_is_an_allocation.md @@ -83,11 +83,11 @@ is two details). Therefore: - the lowest possible price for a CPU-only job is 0.70 compute units -per hour, plus memory (RAM). + per hour, plus memory (RAM). - the lowest possible price for a CPU + P100 GPU job is 7.70 compute -units per hour, plus memory (RAM). + units per hour, plus memory (RAM). - the lowest possible price for a CPU + A100 GPU job is 18.70 compute -units per hour, plus memory (RAM). + units per hour, plus memory (RAM). In reality, every job must request at least some RAM. @@ -140,3 +140,4 @@ scientific programming expertise to your project. We do not yet have a ratio of consultancy hours to Mahuika compute units. +  \ No newline at end of file diff --git a/docs/Getting_Started/Cheat_Sheets/Git-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Git-Reference_Sheet.md index 6db260db7..5d6b56934 100644 --- a/docs/Getting_Started/Cheat_Sheets/Git-Reference_Sheet.md +++ b/docs/Getting_Started/Cheat_Sheets/Git-Reference_Sheet.md @@ -37,70 +37,70 @@ found [here](https://git-scm.com/docs/git), or using `man git`. In order to pull from a private repo, or push changes to a remote, you need to authenticate yourself on the cluster. !!! prerequisite Password authentication -GitHub removed support for password authentication on August 13, 2021. -Using a SSH key is now the easiest way to set up authentication. + GitHub removed support for password authentication on August 13, 2021. + Using a SSH key is now the easiest way to set up authentication. 
### SSH Authentication (GitHub) More information can be found in the [GitHub documentation](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent). -- On the NeSI cluster, run the command +- On the NeSI cluster, run the command  -``` sl -ssh-keygen -t ed25519 -C "your_github_account@example.com" -``` + ``` sl + ssh-keygen -t ed25519 -C "your_github_account@example.com" + ``` - When prompted for a file name, press `enter`. When prompted for a -passcode, press enter twice more. + passphrase, press `enter` twice more. -- Open up the newly created .pub key with the command +- Open up the newly created .pub key with the command  -``` sl -cat ~/.ssh/id_ed25519.pub -``` + ``` sl + cat ~/.ssh/id_ed25519.pub + ``` -(or whatever you named the key). It should look something like: + (or whatever you named the key). It should look something like:  -``` sl -ssh-ed25519 ABCDEFGKSAfjksjafkjsaLJfakjJF your_github_account@example.com -``` + ``` sl + ssh-ed25519 ABCDEFGKSAfjksjafkjsaLJfakjJF your_github_account@example.com + ``` -Copy the whole key. + Copy the whole key. - Now log in to your github account. In the upper-right corner of any -page, click your profile photo click **Settings**. + page, click your profile photo, then click **Settings**. -![Settings icon in the user -bar](../../assets/images/Git-Reference_Sheet.png) + ![Settings icon in the user + bar](../../assets/images/Git-Reference_Sheet.png) - In the "Access" section of the sidebar, click **SSH and GPG keys**. - Click **New SSH key** or **Add SSH key**. -![SSH Key button](../../assets/images/Git-Reference_Sheet_0.png) + ![SSH Key button](../../assets/images/Git-Reference_Sheet_0.png) - In the "Title" field, put "Mahuika" or "NeSI". - Paste your key into the "Key" field. -![The key field](../../assets/images/Git-Reference_Sheet_1.png) + ![The key field](../../assets/images/Git-Reference_Sheet_1.png) - Click **Add SSH key**.
- Switching back to your terminal on the cluster, you can test your -connection with the command + connection with the command  -``` sl -ssh -T git@github.com -``` -You may be promted to authenticate, if so type 'yes' -If everything is working, you should see the message + ``` sl + ssh -T git@github.com + ``` + You may be prompted to authenticate; if so, type 'yes'. + If everything is working, you should see the message  -``` sl -Hi User! You've successfully authenticated, but GitHub does not provide shell access. -``` + ``` sl + Hi User! You've successfully authenticated, but GitHub does not provide shell access. + ``` ## Basics @@ -111,7 +111,7 @@ You can create a repository with either of the following commands. | clone | `git clone https://github.com/nesi/perf-training.git` | Copies a remote repository into your current directory. | | init | `git init` | Creates a new empty repo in your current directory. | - +  | | | | |---------|----------------------------------|--------------------------------------------------------------------------------------------------------------------------| @@ -141,11 +141,11 @@ will be the repo you cloned from, or set manually using | push  | `git push` | Incorporates changes from local repo into 'origin'.  | | | `git push ` | Incorporates changes from local repo into `` `` | !!! prerequisite Tip -If you are working without collaborators, there should be no reason to -have a conflict between your local and your remote repo. Make sure you -always git pull when starting work on your local and git push when -finished, this will save you wasting time resolving unnecessary -merges. + If you are working without collaborators, there should be no reason to + have a conflict between your local and your remote repo. Make sure you + always `git pull` when starting work on your local and `git push` when + finished; this will save you wasting time resolving unnecessary + merges. ## Branches @@ -159,4 +159,4 @@ multiple branches, or requires merging.
| checkout | `git checkout ` | Switch to editing branch `` | | merge | `git merge ` | Merge `` into current branch. | !!! prerequisite Other Resources -- \ No newline at end of file + - \ No newline at end of file diff --git a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md index a63f70299..4009f2e36 100644 --- a/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md +++ b/docs/Getting_Started/Cheat_Sheets/Slurm-Reference_Sheet.md @@ -44,11 +44,11 @@ slurm into a terminal | sinfo | `sinfo` | Shows the current state of our Slurm partitions. | |   |   |   | - +  ------------------------------------------------------------------------ - +  ## *sbatch* options @@ -62,28 +62,28 @@ an '=' sign e.g. `#SBATCH --account=nesi99999` or a space e.g. ### General options ------------------------ ---------------------------------------- --------------------------------------------------------------------------------------------------- ---job-name `#SBATCH --job-name=MyJob` The name that will appear when using squeue or sacct + ----------------------- ---------------------------------------- --------------------------------------------------------------------------------------------------- + --job-name `#SBATCH --job-name=MyJob` The name that will appear when using squeue or sacct ---account `#SBATCH --account=nesi99999` The account your core hours will be 'charged' to. + --account `#SBATCH --account=nesi99999` The account your core hours will be 'charged' to. ---time `#SBATCH --time=DD-HH:MM:SS` Job max walltime + --time `#SBATCH --time=DD-HH:MM:SS` Job max walltime ---mem `#SBATCH --mem=512MB` Memory required per node. + --mem `#SBATCH --mem=512MB` Memory required per node. ---partition `#SBATCH --partition=long` Specified job -[partition](https://support.nesi.org.nz/hc/en-gb/articles/360000204076-Mahuika-Slurm-Partitions). 
+ --partition `#SBATCH --partition=long` Specified job + [partition](https://support.nesi.org.nz/hc/en-gb/articles/360000204076-Mahuika-Slurm-Partitions). ---output `#SBATCH --output=%j_output.out` Path and name of standard output file. + --output `#SBATCH --output=%j_output.out` Path and name of standard output file. ---mail-user `#SBATCH --mail-user=bob123@gmail.com` Address to send mail notifications. + --mail-user `#SBATCH --mail-user=bob123@gmail.com` Address to send mail notifications. ---mail-type `#SBATCH --mail-type=ALL` Will send a mail notification at `BEGIN END FAIL` + --mail-type `#SBATCH --mail-type=ALL` Will send a mail notification at `BEGIN END FAIL` -`#SBATCH --mail-type=TIME_LIMIT_80` Will send message at *80%* walltime + `#SBATCH --mail-type=TIME_LIMIT_80` Will send message at *80%* walltime ---no-requeue `#SBATCH --no-requeue` Will stop job being requeued in the case of node failure. ------------------------ ---------------------------------------- --------------------------------------------------------------------------------------------------- + --no-requeue `#SBATCH --no-requeue` Will stop job being requeued in the case of node failure. + ----------------------- ---------------------------------------- --------------------------------------------------------------------------------------------------- ### Parallel options @@ -208,11 +208,11 @@ defined. !!! prerequisite Tip -Many options have a short and long form e.g. -`#SBATCH --job-name=MyJob` & `#SBATCH -J=MyJob`. -``` sl -echo "Completed task ${SLURM_ARRAY_TASK_ID} / ${SLURM_ARRAY_TASK_COUNT} successfully" -``` + Many options have a short and long form e.g. + `#SBATCH --job-name=MyJob` & `#SBATCH -J MyJob`. + ``` sl + echo "Completed task ${SLURM_ARRAY_TASK_ID} / ${SLURM_ARRAY_TASK_COUNT} successfully" + ``` ## Tokens @@ -232,9 +232,9 @@ Common examples. | `$SLURM_NTASKS` | Useful as an input for MPI functions. | | `$SLURM_SUBMIT_DIR` | Directory where `sbatch` was called. | !!!
prerequisite Tip -In order to decrease the chance of a variable being misinterpreted you -should use the syntax `${NAME_OF_VARIABLE}` and define in strings if -possible. e.g. -``` sl -echo "Completed task ${SLURM_ARRAY_TASK_ID} / ${SLURM_ARRAY_TASK_COUNT} successfully" -``` \ No newline at end of file + In order to decrease the chance of a variable being misinterpreted you + should use the syntax `${NAME_OF_VARIABLE}` and define in strings if + possible. e.g. + ``` sl + echo "Completed task ${SLURM_ARRAY_TASK_ID} / ${SLURM_ARRAY_TASK_COUNT} successfully" + ``` \ No newline at end of file diff --git a/docs/Getting_Started/Cheat_Sheets/Unix_Shell-Reference_Sheet.md b/docs/Getting_Started/Cheat_Sheets/Unix_Shell-Reference_Sheet.md index 99c10f4b3..aa60a60d5 100644 --- a/docs/Getting_Started/Cheat_Sheets/Unix_Shell-Reference_Sheet.md +++ b/docs/Getting_Started/Cheat_Sheets/Unix_Shell-Reference_Sheet.md @@ -25,7 +25,7 @@ machines. If you do not have any experiencing using Unix Shell we would advise going at least the first (3 parts) of the [Software Carpentry Unix Shell lessons](http://swcarpentry.github.io/shell-novice/). - +  | | | | |-------------|----------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| @@ -53,11 +53,11 @@ Unix Shell lessons](http://swcarpentry.github.io/shell-novice/). | mkdir | mkdir directory | Create a directory of the specified name. | | man | man ls | Bring up the manual of a command (in this case ls). | !!! prerequisite Tip -Pressing the 'tab' key once will automatically complete the line if it -is the only option. e.g. -![complete1.gif](../../assets/images/Unix_Shell-Reference_Sheet.gif) -If there are more than one possible completions, pressing tab again -will show all those options. 
-![complete2.gif](../../assets/images/Unix_Shell-Reference_Sheet_0.gif) -Use of the tab key can help navigate the filesystem, spellcheck your -commands and save you time typing. \ No newline at end of file + Pressing the 'tab' key once will automatically complete the line if it + is the only option. e.g.  + ![complete1.gif](../../assets/images/Unix_Shell-Reference_Sheet.gif) + If there is more than one possible completion, pressing tab again + will show all those options. + ![complete2.gif](../../assets/images/Unix_Shell-Reference_Sheet_0.gif) + Use of the tab key can help navigate the filesystem, spellcheck your + commands and save you time typing. \ No newline at end of file diff --git a/docs/Getting_Started/Cheat_Sheets/tmux-Reference_sheet.md b/docs/Getting_Started/Cheat_Sheets/tmux-Reference_sheet.md index 9284ad2bb..b142fb801 100644 --- a/docs/Getting_Started/Cheat_Sheets/tmux-Reference_sheet.md +++ b/docs/Getting_Started/Cheat_Sheets/tmux-Reference_sheet.md @@ -44,11 +44,11 @@ $ tmux attach -t data_transfer Once reattached your session will be where you left it.   You can name the session whatever is most appropriate, such as the task you are performing.  You can run as many sessions as you like and they will -remain until you terminate the tmux session or the node is rebooted. +remain until you terminate the tmux session or the node is rebooted.  Also of note, your session will be available even if your laptop/desktop crashes or the network goes down.
- +  More information can be found on the web, here are some good references: @@ -57,3 +57,4 @@ Shortcut keys and cheat sheet: Getting started Guide: +  \ No newline at end of file diff --git a/docs/Getting_Started/Getting_Help/Consultancy.md b/docs/Getting_Started/Getting_Help/Consultancy.md index 9acdfcdcd..8388a41c2 100644 --- a/docs/Getting_Started/Getting_Help/Consultancy.md +++ b/docs/Getting_Started/Getting_Help/Consultancy.md @@ -49,25 +49,25 @@ Some examples of outcomes we could assist with (this list is general and non-exhaustive): - Code development -- Design and develop research software from scratch -- Algorithmic improvements -- Translate Python/R/Matlab code to C/C++/Fortran for faster -execution -- Accelerate code by offloading computations to a GPU -- Develop visualisation and post-processing tools (GUIs, -dashboards, etc) + - Design and develop research software from scratch + - Algorithmic improvements + - Translate Python/R/Matlab code to C/C++/Fortran for faster + execution + - Accelerate code by offloading computations to a GPU + - Develop visualisation and post-processing tools (GUIs, + dashboards, etc) - Performance improvement -- Code optimisation – profile and improve efficiency (speed and -memory), IO performance -- Parallelisation – software (OpenMP, MPI, etc.) and workflow -parallelisation + - Code optimisation – profile and improve efficiency (speed and + memory), IO performance + - Parallelisation – software (OpenMP, MPI, etc.) 
and workflow + parallelisation - Improve software sustainability (version control, testing, -continuous integration, etc) + continuous integration, etc) - Data Science Engineering -- Optimise numerical performance of machine learning pipelines -- Conduct an Exploratory Data Analysis -- Assist with designing and fitting explanatory and predictive -models + - Optimise numerical performance of machine learning pipelines + - Conduct an Exploratory Data Analysis + - Assist with designing and fitting explanatory and predictive + models - Anything else you can think of ;-) ## What can you expect from us? @@ -76,13 +76,13 @@ During a consultancy project we aim to provide: - Expertise and advice - An agreed timeline to develop or improve a solution (typical -projects are of the order of 1 day per week for up to 4 months but -this is determined on a case-by-case basis) + projects are of the order of 1 day per week for up to 4 months but + this is determined on a case-by-case basis) - Training, knowledge transfer and/or capability development - A summary document outlining what has been achieved during the -project + project - A case study published on our website after the project has been -completed, to showcase the work you are doing on NeSI + completed, to showcase the work you are doing on NeSI ## What is expected of you? @@ -90,18 +90,18 @@ Consultancy projects are intended to be a collaboration and thus some input is required on your part. 
You should be willing to: - Contribute to a case study upon successful completion of the -consultancy project + consultancy project - Complete a short survey to help us measure the impact of our service - Attend regular meetings (usually via video conference) - Invest time to answer questions, provide code and data as necessary -and make changes to your workflow if needed + and make changes to your workflow if needed - [Acknowledge](https://www.nesi.org.nz/services/high-performance-computing/guidelines/acknowledgement-and-publication) -NeSI in article and code publications that we have contributed to, -which could include co-authorship if our contribution is deemed -worthy + NeSI in article and code publications that we have contributed to, + which could include co-authorship if our contribution is deemed + worthy - Accept full ownership/maintenance of the work after the project -completes (NeSI's involvement in the project is limited to the -agreed timeline) + completes (NeSI's involvement in the project is limited to the + agreed timeline) ## Previous projects @@ -109,68 +109,68 @@ Listed below are some examples of previous projects we have contributed to: - [A quantum casino helps define atoms in the big -chill](https://www.nesi.org.nz/case-studies/quantum-casino-helps-define-atoms-big-chill) + chill](https://www.nesi.org.nz/case-studies/quantum-casino-helps-define-atoms-big-chill) - [Using statistical models to help New Zealand prepare for large -earthquakes](https://www.nesi.org.nz/case-studies/using-statistical-models-help-new-zealand-prepare-large-earthquakes) + earthquakes](https://www.nesi.org.nz/case-studies/using-statistical-models-help-new-zealand-prepare-large-earthquakes) - [Improving researchers' ability to access and analyse climate model -data -sets](https://www.nesi.org.nz/case-studies/improving-researchers-ability-access-and-analyse-climate-model-data-sets) + data + 
sets](https://www.nesi.org.nz/case-studies/improving-researchers-ability-access-and-analyse-climate-model-data-sets) - [Speeding up the post-processing of a climate model data -pipeline](https://www.nesi.org.nz/case-studies/speeding-post-processing-climate-model-data-pipeline) + pipeline](https://www.nesi.org.nz/case-studies/speeding-post-processing-climate-model-data-pipeline) - [Overcoming data processing overload in scientific web mapping -software](https://www.nesi.org.nz/case-studies/overcoming-data-processing-overload-scientific-web-mapping-software) + software](https://www.nesi.org.nz/case-studies/overcoming-data-processing-overload-scientific-web-mapping-software) - [Visualising ripple effects in riverbed sediment -transport](https://www.nesi.org.nz/case-studies/visualising-ripple-effects-riverbed-sediment-transport) + transport](https://www.nesi.org.nz/case-studies/visualising-ripple-effects-riverbed-sediment-transport) - [New Zealand's first national river flow forecasting system for -flooding -resilience](https://www.nesi.org.nz/case-studies/new-zealand%E2%80%99s-first-national-river-flow-forecasting-system-flooding-resilience) + flooding + resilience](https://www.nesi.org.nz/case-studies/new-zealand%E2%80%99s-first-national-river-flow-forecasting-system-flooding-resilience) - [A fast model for predicting floods and storm -damage](https://www.nesi.org.nz/case-studies/fast-model-predicting-floods-and-storm-damage) + damage](https://www.nesi.org.nz/case-studies/fast-model-predicting-floods-and-storm-damage) - [How multithreading and vectorisation can speed up seismic -simulations by -40%](https://www.nesi.org.nz/case-studies/how-multithreading-and-vectorisation-can-speed-seismic-simulations-40) + simulations by + 40%](https://www.nesi.org.nz/case-studies/how-multithreading-and-vectorisation-can-speed-seismic-simulations-40) - [Machine learning for marine -mammals](https://www.nesi.org.nz/case-studies/machine-learning-marine-mammals) + 
mammals](https://www.nesi.org.nz/case-studies/machine-learning-marine-mammals) - [Parallel processing for ocean -life](https://www.nesi.org.nz/case-studies/parallel-processing-ocean-life) + life](https://www.nesi.org.nz/case-studies/parallel-processing-ocean-life) - [NeSI support helps keep NZ rivers -healthy](https://www.nesi.org.nz/case-studies/nesi-support-helps-keep-nz-rivers-healthy) + healthy](https://www.nesi.org.nz/case-studies/nesi-support-helps-keep-nz-rivers-healthy) - [Heating up nanowires with -HPC](https://www.nesi.org.nz/case-studies/heating-nanowires-hpc) + HPC](https://www.nesi.org.nz/case-studies/heating-nanowires-hpc) - [The development of next generation weather and climate models is -heating -up](https://www.nesi.org.nz/case-studies/development-next-generation-weather-and-climate-models-heating) + heating + up](https://www.nesi.org.nz/case-studies/development-next-generation-weather-and-climate-models-heating) - [Understanding the behaviours of -light](https://www.nesi.org.nz/case-studies/understanding-behaviours-light) + light](https://www.nesi.org.nz/case-studies/understanding-behaviours-light) - [Getting closer to more accurate climate predictions for New -Zealand](https://www.nesi.org.nz/case-studies/getting-closer-more-accurate-climate-predictions-new-zealand) + Zealand](https://www.nesi.org.nz/case-studies/getting-closer-more-accurate-climate-predictions-new-zealand) - [Fractal analysis of brain signals for autism spectrum -disorder](https://www.nesi.org.nz/case-studies/fractal-analysis-brain-signals-autism-spectrum-disorder) + disorder](https://www.nesi.org.nz/case-studies/fractal-analysis-brain-signals-autism-spectrum-disorder) - [Optimising tools used for genetic -analysis](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis) + analysis](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis) - [Investigating climate 
-sensitivity](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis) + sensitivity](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis) - [Tracking coastal precipitation systems in the -tropics](https://www.nesi.org.nz/case-studies/tracking-coastal-precipitation-systems-tropics) + tropics](https://www.nesi.org.nz/case-studies/tracking-coastal-precipitation-systems-tropics) - [Powering global climate -simulations](https://www.nesi.org.nz/case-studies/powering-global-climate-simulations) + simulations](https://www.nesi.org.nz/case-studies/powering-global-climate-simulations) - [Optimising tools used for genetic -analysis](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis) + analysis](https://www.nesi.org.nz/case-studies/optimising-tools-used-genetic-analysis) - [Investigating climate -sensitivity](https://www.nesi.org.nz/case-studies/investigating-climate-sensitivity) + sensitivity](https://www.nesi.org.nz/case-studies/investigating-climate-sensitivity) - [Improving earthquake forecasting -methods](https://www.nesi.org.nz/case-studies/improving-earthquake-forecasting-methods) + methods](https://www.nesi.org.nz/case-studies/improving-earthquake-forecasting-methods) - [Modernising models to diagnose and treat disease and -injury](https://www.nesi.org.nz/case-studies/modernising-models-diagnose-and-treat-disease-and-injury) + injury](https://www.nesi.org.nz/case-studies/modernising-models-diagnose-and-treat-disease-and-injury) - [Cataloguing NZ's earthquake -activities](https://www.nesi.org.nz/case-studies/cataloguing-nz%E2%80%99s-earthquake-activities) + activities](https://www.nesi.org.nz/case-studies/cataloguing-nz%E2%80%99s-earthquake-activities) - [Finite element modelling of biological -cells](https://www.nesi.org.nz/case-studies/finite-element-modelling-biological-cells) + cells](https://www.nesi.org.nz/case-studies/finite-element-modelling-biological-cells) - [Preparing New Zealand to adapt 
to climate -change](https://www.nesi.org.nz/case-studies/preparing-new-zealand-adapt-climate-change) + change](https://www.nesi.org.nz/case-studies/preparing-new-zealand-adapt-climate-change) - [Using GPUs to expand our understanding of the solar -system](https://www.nesi.org.nz/case-studies/using-gpus-expand-our-understanding-solar-system) + system](https://www.nesi.org.nz/case-studies/using-gpus-expand-our-understanding-solar-system) - [Speeding up Basilisk with -GPGPUs](https://www.nesi.org.nz/case-studies/speeding-basilisk-gpgpus) + GPGPUs](https://www.nesi.org.nz/case-studies/speeding-basilisk-gpgpus) - [Helping communities anticipate flood -events](https://www.nesi.org.nz/case-studies/helping-communities-anticipate-flood-events) \ No newline at end of file + events](https://www.nesi.org.nz/case-studies/helping-communities-anticipate-flood-events) \ No newline at end of file diff --git a/docs/Getting_Started/Getting_Help/Job_efficiency_review.md b/docs/Getting_Started/Getting_Help/Job_efficiency_review.md index d7840d7da..34d9da393 100644 --- a/docs/Getting_Started/Getting_Help/Job_efficiency_review.md +++ b/docs/Getting_Started/Getting_Help/Job_efficiency_review.md @@ -39,26 +39,26 @@ At the end of a job efficiency review you could expect one of the following outcomes: - We determine that your workflow/jobs are running efficiently on our -platform + platform - Some areas for improvement are identified (and agreed with you) -- For "quick wins" we may be able to achieve these improvements -within the scope of the job efficiency review -- For larger pieces of work, we would assist you in applying for a -[NeSI -Consultancy](https://support.nesi.org.nz/hc/en-gb/articles/360000751916-Consultancy) -project, where we would work with you on a longer term project -to implement any agreed changes + - For "quick wins" we may be able to achieve these improvements + within the scope of the job efficiency review + - For larger pieces of work, we would assist you in applying 
for a + [NeSI + Consultancy](https://support.nesi.org.nz/hc/en-gb/articles/360000751916-Consultancy) + project, where we would work with you on a longer term project + to implement any agreed changes ## What you can expect from us During a job efficiency review you can expect that we will: - Spend some time (typically up to 10-20 hours) to investigate your -software and workflows that you are running on NeSI, to determine -whether there is an opportunity for optimisation or efficiency -improvements + software and workflows that you are running on NeSI, to determine + whether there is an opportunity for optimisation or efficiency + improvements - Communicate clearly and pass on any suggestions for improvements -that we identify + that we identify ## What we expect of you @@ -66,10 +66,10 @@ During a job efficiency review, some input will be required from you, such as: - Investing time to answer questions, provide code and input data as -necessary and make changes to your workflow if needed (this may -involve attending some Zoom meetings and/or email communication) + necessary and make changes to your workflow if needed (this may + involve attending some Zoom meetings and/or email communication) - Setting up some test configurations that we can use for profiling -and benchmarking your jobs; these should be representative of your -work but don't necessarily need to be complete calculations. For -example, with a simulation code we could choose to reduce the number -of time steps but keep the domain size the same. \ No newline at end of file + and benchmarking your jobs; these should be representative of your + work but don't necessarily need to be complete calculations. For + example, with a simulation code we could choose to reduce the number + of time steps but keep the domain size the same. 
\ No newline at end of file diff --git a/docs/Getting_Started/Getting_Help/NeSI_wide_area_network_connectivity.md b/docs/Getting_Started/Getting_Help/NeSI_wide_area_network_connectivity.md index 1b331f966..c1074cce8 100644 --- a/docs/Getting_Started/Getting_Help/NeSI_wide_area_network_connectivity.md +++ b/docs/Getting_Started/Getting_Help/NeSI_wide_area_network_connectivity.md @@ -23,7 +23,7 @@ NeSI's national platform facilities are connected to the [REANNZ](https://www.reannz.co.nz/) network, Aotearoa's high-performance national digital network (or NREN). This national network supports collaboration and contributions to data-intensive and complex science -and research initiatives in New Zealand and across the globe. +and research initiatives in New Zealand and across the globe.  ## How to verify the status of external (wide area network - WAN) connectivity for NeSI @@ -47,5 +47,6 @@ view of specific addresses, e.g.: +  - +  \ No newline at end of file diff --git a/docs/Getting_Started/Getting_Help/System_status.md b/docs/Getting_Started/Getting_Help/System_status.md index 5ddfd9600..34c94af32 100644 --- a/docs/Getting_Started/Getting_Help/System_status.md +++ b/docs/Getting_Started/Getting_Help/System_status.md @@ -29,7 +29,7 @@ opt-out). The [support.nesi.org.nz](https://support.nesi.org.nz) homepage shows current incidents and upcoming scheduled events (based on status.nesi.org.nz). - +  ## How to manage your subscription to notifications @@ -46,7 +46,7 @@ preferences](https://support.nesi.org.nz/hc/en-gb/articles/4563294188687) ## status.nesi.org.nz NeSI does publish service incidents and scheduled maintenance via -[status.nesi.org.nz](https://status.nesi.org.nz). +[status.nesi.org.nz](https://status.nesi.org.nz).  Interested parties are invited to subscribe to updates (via SMS or email). 
diff --git a/docs/Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md b/docs/Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md index a0c42fed2..9ada3a871 100644 --- a/docs/Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md +++ b/docs/Getting_Started/Getting_Help/Weekly_Online_Office_Hours.md @@ -19,52 +19,52 @@ zendesk_section_id: 360000164635 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) -Have questions about NeSI services? -Looking for tips on how to optimise your HPC jobs? +Have questions about NeSI services?  +Looking for tips on how to optimise your HPC jobs? Or, simply want to meet some of the team behind NeSI Support? We run regular online Office Hours sessions, hosted via Zoom. These sessions are open to anyone - you don't need to be an existing NeSI -user. - +user. + ## **Office Hours in November 2023** Click on the links below to add the date & Zoom link to your calendar: - [**01 November (Wednesday): 9:00-10:00 -AM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=Nzh1bzhnazNnNGplaTV1YnJjZGlxMTBoNmEgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) + AM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=Nzh1bzhnazNnNGplaTV1YnJjZGlxMTBoNmEgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) - [**08 November (Wednesday): 3:00-4:00 -PM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=MmVnMGdzb2VtMzYxYnNxaWZicGo3dXQzOHAgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) + PM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=MmVnMGdzb2VtMzYxYnNxaWZicGo3dXQzOHAgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) - [**15 November (Wednesday): 9:00-10:00 
-AM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=NmVwanFvaXJuMmtkbzNrbGZkcmIzdHRla3AgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) + AM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=NmVwanFvaXJuMmtkbzNrbGZkcmIzdHRla3AgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) - [**22 November (Wednesday): 3:00-4:00 -PM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=NTZlbGplMnFmMGRyMjV2ODluYjhzdGpudDkgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) + PM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=NTZlbGplMnFmMGRyMjV2ODluYjhzdGpudDkgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) - [**29 November (Wednesday): 9:00-10:00 -AM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=MW5tbmZhNmk4YzMzdTFmN3BudmFwdjRqbWcgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) + AM**](https://calendar.google.com/calendar/event?action=TEMPLATE&tmeid=MW5tbmZhNmk4YzMzdTFmN3BudmFwdjRqbWcgY19oZW42cnIwMmV0MzlrYXQyaG11YW1pZG90c0Bn&tmsrc=c_hen6rr02et39kat2hmuamidots%40group.calendar.google.com) If you are unable to add an Office Hour session to your calendar through these links, please email us at  and we can send a -calendar invite directly to you. - +calendar invite directly to you.  + ## **How Does It Work** Each session follows a casual 'drop-in / drop-out' format, where you can pop in at any point during the hour and stay for as long or as little as -you'd like. +you'd like.  Also, don't worry if you have a question or challenge that can't be -solved on the spot. +solved on the spot. 
We can always use the Office Hours to collect some basic information about your issue and then reconnect with you at a later time to troubleshoot things further. - +  ## **Other ways to get help** @@ -74,7 +74,7 @@ have, big or small. You can also find helpful user resources, links and documentation elsewhere in our [User Support Centre](https://support.nesi.org.nz/hc/en-gb). - +  ## **Feedback** @@ -82,3 +82,4 @@ If you have any suggestions for ways to improve these Office Hours sessions, please fill out this [feedback form](https://forms.gle/HELw73FpUQaTYBV6A). +  \ No newline at end of file diff --git a/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md b/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md index 65c4c77c1..33af0c332 100644 --- a/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md +++ b/docs/Getting_Started/Next_Steps/Finding_Job_Efficiency.md @@ -27,7 +27,7 @@ completion, this way you can improve your job specifications in the future. Once your job has finished check the relevant details using the tools: -`nn_seff` or `sacct` For example: +`nn_seff` or `sacct` For example: **nn\_seff** @@ -53,7 +53,7 @@ very low and consideration should be given to reducing memory requests for similar jobs.  If in doubt, please contact for guidance. - +  **sacct** @@ -61,18 +61,18 @@ guidance. sacct --format="JobID,JobName,Elapsed,AveCPU,MinCPU,TotalCPU,Alloc,NTask,MaxRSS,State" -j ``` !!! 
prerequisite Tip -*If you want to make this your default* `sacct` *setting, run;* -``` sl -echo 'export SACCT_FORMAT="JobID,JobName,Elapsed,AveCPU,MinCPU,TotalCPU,Alloc%2,NTask%2,MaxRSS,State"' >> ~/.bash_profile -source ~/.bash_profile -``` + *If you want to make this your default* `sacct` *setting, run;* + ``` sl + echo 'export SACCT_FORMAT="JobID,JobName,Elapsed,AveCPU,MinCPU,TotalCPU,Alloc%2,NTask%2,MaxRSS,State"' >> ~/.bash_profile + source ~/.bash_profile + ``` ------------------------------------------------------------------------ Below is an output for reference: ``` sl -JobID JobName Elapsed AveCPU MinCPU TotalCPU AllocCPUS NTasks MaxRSS State + JobID JobName Elapsed AveCPU MinCPU TotalCPU AllocCPUS NTasks MaxRSS State ------------ ---------- ---------- ---------- ---------- ---------- ---------- -------- ---------- ---------- 3007056 rfm_ANSYS+ 00:27:07 03:35:55 16 COMPLETED 3007056.bat+ batch 00:27:07 03:35:54 03:35:54 03:35:55 16 1 13658349K COMPLETED @@ -108,7 +108,7 @@ steps, so in this case 13 GB. For our next run we may want to set: the computation hours would be equal to `Elapsed` x `AllocCPUS`. In this case our ideal `TotalCPU` would be 07:12:00, as our job only -managed 03:35:55 we can estimate the CPU usage was around 50% +managed 03:35:55 we can estimate the CPU usage was around 50% It might be worth considering reducing the number of CPUs requested, however bear in mind there are other factors that affect CPU efficiency. @@ -116,7 +116,7 @@ however bear in mind there are other factors that affect CPU efficiency. #SBATCH --cpus-per-task=10 ``` - +  Note: When using sacct to determine the amount of memory your job used - in order to reduce memory wastage - please keep in mind that Slurm @@ -131,15 +131,15 @@ provides a more accurate measure. 
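The 50% figure above is simply `TotalCPU` divided by the ideal `Elapsed` x `AllocCPUS`. As a rough sketch of that arithmetic (the helper names here are illustrative, not NeSI tooling, and fractional seconds in `sacct` fields are ignored):

``` python
def slurm_seconds(t):
    """Convert a Slurm '[DD-]HH:MM:SS' duration string to seconds."""
    days, _, rest = t.rpartition("-")
    h, m, s = (int(float(part)) for part in rest.split(":"))
    return (int(days) if days else 0) * 86400 + h * 3600 + m * 60 + s

def cpu_efficiency(total_cpu, elapsed, alloc_cpus):
    """TotalCPU as a fraction of the ideal Elapsed x AllocCPUS."""
    return slurm_seconds(total_cpu) / (slurm_seconds(elapsed) * alloc_cpus)

# Figures from the sacct output above: 03:35:55 of CPU time over
# 00:27:07 elapsed on 16 allocated CPUs.
print(f"{cpu_efficiency('03:35:55', '00:27:07', 16):.0%}")
```

This prints `50%`, matching the estimate above.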
Further technical notes for those interested in commonly used memory usage metrics on linux systems: -**VSS** >= **RSS** >= **PSS** >= **USS** +**VSS** >= **RSS** >= **PSS** >= **USS** **VSS-Virtual Set Size** - Virtual memory consumption (contains memory -consumed by shared libraries) +consumed by shared libraries) **RSS-Resident Set Size** - Used physical memory (contains memory -consumed by shared libraries) +consumed by shared libraries) **PSS-Proportional Set Size** - Actual physical memory used -(proportional allocation of memory consumed by shared libraries) +(proportional allocation of memory consumed by shared libraries) **USS-Unique Set Size** - Process consumed physical memory alone (does -not contain the memory occupied by the shared library) +not contain the memory occupied by the shared library) `PSS = USS + (RSS/# shared processes)` ## During Runtime @@ -154,9 +154,9 @@ If 'nodelist' is not one of the fields in the output of your `sacct` or command; `squeue -h -o %N -j ` The node will look something like `wbn123` on Mahuika or `nid00123` on Māui !!! prerequisite Note -If your job is using MPI it may be running on multiple nodes + If your job is using MPI it may be running on multiple nodes -### htop +### htop  ``` sl ssh -t wbn175 htop -u $USER @@ -166,7 +166,7 @@ If it is your first time connecting to that particular node, you may be prompted: ``` sl -The authenticity of host can't be established +The authenticity of host can't be established  Are you sure you want to continue connecting (yes/no)? ``` @@ -194,8 +194,8 @@ Processes in green can be ignored **MEM% **Percentage Memory utilisation. !!! prerequisite Warning -If the job finishes, or is killed you will be kicked off the node. If -htop freezes, type `reset` to clear your terminal. + If the job finishes, or is killed you will be kicked off the node. If + htop freezes, type `reset` to clear your terminal. ## Limitations of using CPU Efficiency @@ -214,7 +214,7 @@ more details. 
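To make the relationship between the four memory metrics above concrete, here is a small worked example of the `PSS = USS + (RSS/# shared processes)` formula, using made-up figures for a process with some private memory plus a shared library mapped by three processes:

``` python
# Hypothetical figures, in MB, for one process.
uss = 40                  # USS: memory unique to this process
shared_resident = 30      # resident pages of a library shared by 3 processes
n_sharing = 3

rss = uss + shared_resident               # RSS counts shared pages in full
pss = uss + shared_resident / n_sharing   # PSS charges a proportional share

# Consistent with the VSS >= RSS >= PSS >= USS ordering above.
assert rss >= pss >= uss
print(rss, pss, uss)
```

Summing RSS across many processes double-counts shared libraries, which is why PSS gives a fairer per-process total.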
![qdyn\_eff.png](../../assets/images/Finding_Job_Efficiency_0.png) From the above plot of CPU efficiency, you might decide a 5% reduction -of CPU efficiency is acceptable and scale your job up to 18 CPU cores . +of CPU efficiency is acceptable and scale your job up to 18 CPU cores .  ![qdyn\_walltime.png](../../assets/images/Finding_Job_Efficiency_1.png) diff --git a/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md b/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md index 00e76d37a..18e1d20bf 100644 --- a/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md +++ b/docs/Getting_Started/Next_Steps/Job_Scaling_Ascertaining_job_dimensions.md @@ -48,7 +48,7 @@ your project's [fair share score](https://support.nesi.org.nz/hc/en-gb/articles/360000743536) is likely to suffer.  Your project's fair share score will be reduced in view of compute time spent regardless of whether you obtain a result or -not. +not.  @@ -132,7 +132,7 @@ jobs fails due to not asking for enough resources, a small scale job will (hopefully) not have waited for hours or days in the queue beforehand. !!! 
prerequisite Examples -[Multithreading -Scaling](https://support.nesi.org.nz/hc/en-gb/articles/360001173895) -[MPI -Scaling](https://support.nesi.org.nz/hc/en-gb/articles/360001173875) \ No newline at end of file + [Multithreading + Scaling](https://support.nesi.org.nz/hc/en-gb/articles/360001173895) + [MPI + Scaling](https://support.nesi.org.nz/hc/en-gb/articles/360001173875) \ No newline at end of file diff --git a/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md b/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md index 67aaff8ea..31c09a408 100644 --- a/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md +++ b/docs/Getting_Started/Next_Steps/MPI_Scaling_Example.md @@ -46,11 +46,11 @@ seeds = 60000 #and then split those seeds number equally among size groups, #otherwise set seeds and split_seeds to $ if rank == 0: -seeds = np.arange(seeds) -split_seeds = np.array_split(seeds, size, axis = 0) + seeds = np.arange(seeds) + split_seeds = np.array_split(seeds, size, axis = 0) else: -seeds = None -split_seeds = None + seeds = None + split_seeds = None #Scatter the seeds among each MPI task rank_seeds = comm.scatter(split_seeds, root = 0) @@ -64,19 +64,19 @@ rank_data = np.zeros(len(rank_seeds)) #matrix variable #Then calculate the dot product of the array with itself for i in np.arange(len(rank_seeds)): -seed = rank_seeds[i] -np.random.seed(seed) -data = np.random.rand(matrix,matrix) -data_mm = np.dot(data, data) -rank_data[i] = sum(sum(data_mm)) + seed = rank_seeds[i] + np.random.seed(seed) + data = np.random.rand(matrix,matrix) + data_mm = np.dot(data, data) + rank_data[i] = sum(sum(data_mm)) rank_sum = sum(rank_data) data_gather = comm.gather(rank_sum, root = 0) if rank == 0: -data_sum = sum(data_gather) -print('Gathered data:', data_gather) -print('Sum:', data_sum) + data_sum = sum(data_gather) + print('Gathered data:', data_gather) + print('Sum:', data_sum) ``` You do not need to understand what the above Python script is doing, @@ -113,11 +113,11 @@
seeds = 5000 #and then split those seeds number equally among size groups, #otherwise set seeds and split_seeds to $ if rank == 0: -seeds = np.arange(seeds) -split_seeds = np.array_split(seeds, size, axis = 0) + seeds = np.arange(seeds) + split_seeds = np.array_split(seeds, size, axis = 0) else: -seeds = None -split_seeds = None + seeds = None + split_seeds = None #Scatter the seeds among each MPI task rank_seeds = comm.scatter(split_seeds, root = 0) @@ -131,19 +131,19 @@ rank_data = np.zeros(len(rank_seeds)) #matrix variable #Then calculate the dot product of the array with itself for i in np.arange(len(rank_seeds)): -seed = rank_seeds[i] -np.random.seed(seed) -data = np.random.rand(matrix,matrix) -data_mm = np.dot(data, data) -rank_data[i] = sum(sum(data_mm)) + seed = rank_seeds[i] + np.random.seed(seed) + data = np.random.rand(matrix,matrix) + data_mm = np.dot(data, data) + rank_data[i] = sum(sum(data_mm)) rank_sum = sum(rank_data) data_gather = comm.gather(rank_sum, root = 0) if rank == 0: -data_sum = sum(data_gather) -print('Gathered data:', data_gather) -print('Sum:', data_sum) + data_sum = sum(data_gather) + print('Gathered data:', data_gather) + print('Sum:', data_sum) ``` Now we need to write a Slurm script to run this job. The wall time, @@ -162,26 +162,26 @@ took to get there. ### Slurm Script ``` sl -#!/bin/bash -e -#SBATCH --job-name=MPIScaling2 -#SBATCH --ntasks=2 -#SBATCH --time=00:30:00 -#SBATCH --mem-per-cpu=512MB - -module load Python -srun python MPIscaling.py + #!/bin/bash -e + #SBATCH --job-name=MPIScaling2 + #SBATCH --ntasks=2 + #SBATCH --time=00:30:00 + #SBATCH --mem-per-cpu=512MB + + module load Python + srun python MPIscaling.py ``` Let's run our Slurm script with sbatch and look at our output from `sacct`.
``` sl -JobID JobName Elapsed TotalCPU Alloc MaxRSS State + JobID JobName Elapsed TotalCPU Alloc MaxRSS State -------------- ------------ ----------- ------------ ----- -------- ---------- -6057011 MPIScaling2 00:18:51 01:14:30 4 COMPLETED -6057011.bat+ batch 00:18:51 00:00.607 4 4316K COMPLETED -6057011.ext+ extern 00:18:52 00:00.001 4 0 COMPLETED -6057011.0 python 00:18:46 01:14:30 2 166744K COMPLETED +6057011 MPIScaling2 00:18:51 01:14:30 4 COMPLETED +6057011.bat+ batch 00:18:51 00:00.607 4 4316K COMPLETED +6057011.ext+ extern 00:18:52 00:00.001 4 0 COMPLETED +6057011.0 python 00:18:46 01:14:30 2 166744K COMPLETED ``` Our job performed 5,000 seeds using 2 physical CPU cores (each MPI task @@ -203,28 +203,28 @@ our script with 2, 3, 4, 5 and 6 MPI tasks/physical CPUs and plot the results: ``` sl -JobID JobName Elapsed TotalCPU Alloc MaxRSS State + JobID JobName Elapsed TotalCPU Alloc MaxRSS State -------------- ------------ ----------- ------------ ----- -------- ---------- -6057011 MPIScaling2 00:18:51 01:14:30 4 COMPLETED -6057011.bat+ batch 00:18:51 00:00.607 4 4316K COMPLETED -6057011.ext+ extern 00:18:52 00:00.001 4 0 COMPLETED +6057011 MPIScaling2 00:18:51 01:14:30 4 COMPLETED +6057011.bat+ batch 00:18:51 00:00.607 4 4316K COMPLETED +6057011.ext+ extern 00:18:52 00:00.001 4 0 COMPLETED 6057011.0 python 00:18:46 01:14:30 2 166744K COMPLETED -6054936 MPIScaling3 00:12:29 01:14:10 6 COMPLETED -6054936.bat+ batch 00:12:29 00:00.512 2 4424K COMPLETED -6054936.ext+ extern 00:12:29 00:00.003 6 0 COMPLETED -6054936.0 python 00:12:29 01:14:09 3 174948K COMPLETED -6054937 MPIScaling4 00:09:29 01:15:04 8 COMPLETED -6054937.bat+ batch 00:09:29 00:00.658 2 4432K COMPLETED -6054937.ext+ extern 00:09:29 00:00.003 8 0 COMPLETED -6054937.0 python 00:09:28 01:15:04 4 182064K COMPLETED -6054938 MPIScaling5 00:07:41 01:15:08 10 COMPLETED -6054938.bat+ batch 00:07:41 00:00.679 2 4548K COMPLETED -6054938.ext+ extern 00:07:41 00:00.005 10 0 COMPLETED -6054938.0 python 00:07:36 
01:15:08 5 173632K COMPLETED -6054939 MPIScaling6 00:06:57 01:18:38 12 COMPLETED -6054939.bat+ batch 00:06:57 00:00.609 2 4612K COMPLETED -6054939.ext+ extern 00:06:57 00:00.006 12 44K COMPLETED -6054939.0 python 00:06:51 01:18:37 6 174028K COMPLETED +6054936 MPIScaling3 00:12:29 01:14:10 6 COMPLETED +6054936.bat+ batch 00:12:29 00:00.512 2 4424K COMPLETED +6054936.ext+ extern 00:12:29 00:00.003 6 0 COMPLETED +6054936.0 python 00:12:29 01:14:09 3 174948K COMPLETED +6054937 MPIScaling4 00:09:29 01:15:04 8 COMPLETED +6054937.bat+ batch 00:09:29 00:00.658 2 4432K COMPLETED +6054937.ext+ extern 00:09:29 00:00.003 8 0 COMPLETED +6054937.0 python 00:09:28 01:15:04 4 182064K COMPLETED +6054938 MPIScaling5 00:07:41 01:15:08 10 COMPLETED +6054938.bat+ batch 00:07:41 00:00.679 2 4548K COMPLETED +6054938.ext+ extern 00:07:41 00:00.005 10 0 COMPLETED +6054938.0 python 00:07:36 01:15:08 5 173632K COMPLETED +6054939 MPIScaling6 00:06:57 01:18:38 12 COMPLETED +6054939.bat+ batch 00:06:57 00:00.609 2 4612K COMPLETED +6054939.ext+ extern 00:06:57 00:00.006 12 44K COMPLETED +6054939.0 python 00:06:51 01:18:37 6 174028K COMPLETED ``` ![MPIscalingMem.png](../../assets/images/MPI_Scaling_Example.png) @@ -281,31 +281,31 @@ Looking at the plot of CPUs vs time we can see the asymptotic speedup and this time the best number of CPUs to use for this job looks to be 5 physical CPUs. - +  Now that we have determined that 5 physical CPUs is the optimal number of CPUs for our jobs we will use this as we will submit three more jobs, -using 10,000 15,000 and 20,000 seeds. +using 10,000 15,000 and 20,000 seeds.  
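Before moving on, the asymptotic speedup behind that choice of 5 CPUs can be recomputed directly from the `Elapsed` column of the `sacct` output above. A minimal Python sketch (wall times transcribed by hand from the table; the 2-task run is treated as the baseline):

``` python
# Wall times in seconds from the sacct output above (00:18:51 ... 00:06:57).
elapsed = {2: 1131, 3: 749, 4: 569, 5: 461, 6: 417}

base_tasks, base_time = 2, elapsed[2]
for ntasks in sorted(elapsed):
    speedup = base_time / elapsed[ntasks]        # relative to the 2-task run
    efficiency = speedup * base_tasks / ntasks   # 1.0 would be perfect scaling
    print(f"{ntasks} tasks: speedup {speedup:.2f}, efficiency {efficiency:.2f}")
```

Efficiency stays close to 1.0 up to 5 tasks and then drops noticeably at 6, which is consistent with choosing 5 physical CPUs.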
``` sl -JobID JobName Elapsed TotalCPU Alloc MaxRSS State + JobID JobName Elapsed TotalCPU Alloc MaxRSS State -------------- ------------ ----------- ------------ ----- -------- ---------- -6054938 MPIScaling5k 00:07:41 01:15:08 10 COMPLETED -6054938.bat+ batch 00:07:41 00:00.679 2 4548K COMPLETED -6054938.ext+ extern 00:07:41 00:00.005 10 0 COMPLETED -6054938.0 python 00:07:36 01:15:08 5 173632K COMPLETED -6059931 MPIScaling10k 00:14:57 02:27:36 10 COMPLETED -6059931.bat+ batch 00:14:57 00:00.624 10 4320K COMPLETED -6059931.ext+ extern 00:14:57 00:00:00 10 0 COMPLETED -6059931.0 python 00:14:56 02:27:36 5 170748K COMPLETED -6059939 MPIScaling15k 00:22:39 03:45:13 10 COMPLETED -6059939.bat+ batch 00:22:39 00:00.631 10 4320K COMPLETED -6059939.ext+ extern 00:22:39 00:00:00 10 0 COMPLETED -6059939.0 python 00:22:38 03:45:13 5 168836K COMPLETED -6059945 MPIScaling20k 00:30:34 05:02:42 10 COMPLETED -6059945.bat+ batch 00:30:34 00:00.646 10 4320K COMPLETED -6059945.ext+ extern 00:30:34 00:00.001 10 0 COMPLETED -6059945.0 python 00:30:32 05:02:41 5 172700K COMPLETED +6054938 MPIScaling5k 00:07:41 01:15:08 10 COMPLETED +6054938.bat+ batch 00:07:41 00:00.679 2 4548K COMPLETED +6054938.ext+ extern 00:07:41 00:00.005 10 0 COMPLETED +6054938.0 python 00:07:36 01:15:08 5 173632K COMPLETED +6059931 MPIScaling10k 00:14:57 02:27:36 10 COMPLETED +6059931.bat+ batch 00:14:57 00:00.624 10 4320K COMPLETED +6059931.ext+ extern 00:14:57 00:00:00 10 0 COMPLETED +6059931.0 python 00:14:56 02:27:36 5 170748K COMPLETED +6059939 MPIScaling15k 00:22:39 03:45:13 10 COMPLETED +6059939.bat+ batch 00:22:39 00:00.631 10 4320K COMPLETED +6059939.ext+ extern 00:22:39 00:00:00 10 0 COMPLETED +6059939.0 python 00:22:38 03:45:13 5 168836K COMPLETED +6059945 MPIScaling20k 00:30:34 05:02:42 10 COMPLETED +6059945.bat+ batch 00:30:34 00:00.646 10 4320K COMPLETED +6059945.ext+ extern 00:30:34 00:00.001 10 0 COMPLETED +6059945.0 python 00:30:32 05:02:41 5 172700K COMPLETED ``` We can see from the `sacct` 
output that the wall time seems to be
@@ -342,26 +342,26 @@ request 1 GB of memory and 2 hours.

### Revised Slurm Script

``` sl
-#!/bin/bash -e
-#SBATCH --account=nesi99999
-#SBATCH --job-name=MPIScaling60k
-#SBATCH --time=02:00:00
-#SBATCH --mem-per-task=512MB
-#SBATCH --ntasks=5
-
-module load Python
-srun python scaling.R
+ #!/bin/bash -e
+ #SBATCH --account=nesi99999
+ #SBATCH --job-name=MPIScaling60k
+ #SBATCH --time=02:00:00
+ #SBATCH --mem-per-cpu=512MB
+ #SBATCH --ntasks=5
+
+ module load Python
+ srun python MPIscaling.py
```

-Checking on our job with `sacct`
+ Checking on our job with `sacct`

``` sl
-JobID JobName Elapsed TotalCPU Alloc MaxRSS State
+ JobID JobName Elapsed TotalCPU Alloc MaxRSS State
-------------- ------------ ----------- ------------ ----- -------- ----------
-6061377 MPIScaling60k 01:28:25 14:35:32 10 COMPLETED
-6061377.bat+ batch 01:28:25 00:00.555 10 4320K COMPLETED
-6061377.ext+ extern 01:28:25 00:00:00 10 0 COMPLETED
-6061377.0 python 01:28:22 14:35:32 5 169060K COMPLETED
+6061377 MPIScaling60k 01:28:25 14:35:32 10 COMPLETED
+6061377.bat+ batch 01:28:25 00:00.555 10 4320K COMPLETED
+6061377.ext+ extern 01:28:25 00:00:00 10 0 COMPLETED
+6061377.0 python 01:28:22 14:35:32 5 169060K COMPLETED
```

It looks as though our estimates were accurate in this case, however,
diff --git a/docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md b/docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md
index b6ac44c40..70c8f19d8 100644
--- a/docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md
+++ b/docs/Getting_Started/Next_Steps/Moving_files_to_and_from_the_cluster.md
@@ -29,17 +29,17 @@ zendesk_section_id: 360000189716

[//]: <> (REMOVE ME IF PAGE VALIDATED)

!!!
prerequisite Requirements
-- Have an [active account and
-project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects)
+ - Have an [active account and
+ project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects)

Find more information on the different types of directories
[here](https://support.nesi.org.nz/hc/en-gb/articles/360000177256).

-
+ 

## Using the Jupyter interface

-
+ 

Many users have found the [Jupyter
interface](https://support.nesi.org.nz/hc/en-gb/articles/360001555615-Jupyter-on-NeSI)
@@ -47,7 +47,7 @@ very useful for running code on NeSI. The Jupyter interface only
requires a web browser; the instructions are the same whether you are
connecting from a Windows, Mac or a Linux computer.

-To upload a file, click on the
+To upload a file, click on the 

![](../../assets/images/Moving_files_to_and_from_the_cluster.png)

@@ -60,9 +60,9 @@ right-click on the file to see the menu below,

The Download button is at the bottom.

+ 
-
- 
+ 

## Standard Terminal

@@ -83,14 +83,14 @@ Move a file from Mahuika to your local machine.

scp mahuika: 
```
!!! prerequisite Note
-- This will only work if you have set up aliases as described in
-[Terminal
-Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Terminal-Setup-MacOS-Linux-).
-- As the terms 'maui' and 'mahuika' are defined locally, the above
-commands *only works when using a local terminal* (i.e. not on
-Mahuika).
-- If you are using Windows subsystem, the root paths are different
-as shown by Windows. e.g. `C:` is located at `/mnt/c/`
+ - This will only work if you have set up aliases as described in
+ [Terminal
+ Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Terminal-Setup-MacOS-Linux-).
+ - As the terms 'maui' and 'mahuika' are defined locally, the above
+ commands *only work when using a local terminal* (i.e. not on
+ Mahuika).
+ - If you are using the Windows Subsystem for Linux, the root paths are different
+ from those shown by Windows, e.g.
`C:` is located at `/mnt/c/`

`scp` stands for Secure CoPy and operates in a similar way to regular
cp with the source file as the left term and destination on the right.
@@ -99,7 +99,7 @@ These commands make use of *multiplexing*, which means that if you
already have a connection to the cluster you will not be prompted for
your password.

-### File Managers
+### File Managers 

Most file managers can be used to connect to a remote directory simply
by typing in the address bar (provided you have an active connection to
@@ -143,7 +143,7 @@ authentication.

## Globus

Globus is available for those with large amounts of data, security
-concerns, or connection consistency issues.
+concerns, or connection consistency issues. 

You can find more details on its use on our [Globus support
page](https://support.nesi.org.nz/hc/en-gb/articles/4405623380751-Data-Transfer-using-Globus-V5).
@@ -161,30 +161,30 @@ rclone subcommand options source:path dest:path

The most frequently used Rclone subcommands:

- **rclone copy** – Copy files from the source to the destination,
-skipping what has already been copied.
+ skipping what has already been copied.
- **rclone sync** – Make the source and destination identical,
-modifying only the destination.
+ modifying only the destination.
- **rclone move** – Move files from the source to the destination.
- **rclone delete** – Remove the contents of a path.
- **rclone mkdir** – Create the path if it does not already exist.
- **rclone rmdir** – Remove the path.
- **rclone check** – Check if the files in the source and destination
-match.
+ match.
- **rclone ls** – List all objects in the path, including size and
-path.
+ path.
- **rclone lsd** – List all directories/containers/buckets in the
-path.
+ path.
- **rclone lsl** – List all objects in the path, including size,
-modification time and path.
+ modification time and path.
- **rclone lsf** – List the objects using the virtual directory
-structure based on the object names.
+ structure based on the object names.
- **rclone cat** – Concatenate files and send them to stdout.
- **rclone copyto** – Copy files from the source to the destination,
-skipping what has already been copied.
+ skipping what has already been copied.
- **rclone moveto** – Move the file or directory from the source to
-the destination.
+ the destination.
- **rclone copyurl** – Copy the URL's content to the destination
-without saving it in the tmp storage.
+ without saving it in the tmp storage.

A more extensive list can be found in the [Rclone
documentation](https://rclone.org/docs).
@@ -192,15 +192,15 @@ documentation](https://rclone.org/docs).

## Rsync

Rsync is a utility that provides fast incremental file transfer and
-efficient file synchronization between a computer and a storage disk.
-The basic command syntax of:
+efficient file synchronization between a computer and a storage disk. 
+The basic command syntax is: 

``` sl
rsync -options source target
```

If the data source or target location is a remote site, it is defined
-with syntax:
+ with syntax: 

``` sl
username@server:/path/in/server
@@ -210,8 +210,8 @@ The most frequently used Rsync options:

- **-r**                         Recurse into directories
- **-a **                       Use archive mode: copy files and
-directories recursively and preserve access permissions and time
-stamps.
+ directories recursively and preserve access permissions and time
+ stamps.
- **-v**                        Verbose mode.
- **-z**                        Compress
- **-e ssh**                 Specify the remote shell to use.
@@ -222,3 +222,4 @@ stamps.

A more extensive list can be found in the [Rsync
documentation](https://download.samba.org/pub/rsync/rsync.1).
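When many transfers need to be scripted, the same options can be assembled programmatically before the command is run. A Python sketch (the source and target paths are made-up placeholders; only the standard rsync flags described above are used):

``` python
import subprocess

def build_rsync_cmd(source, target, archive=True, compress=True, dry_run=False):
    """Assemble an rsync command line from the options described above."""
    cmd = ["rsync", "--verbose", "-e", "ssh"]
    if archive:
        cmd.append("--archive")   # -a: recurse, preserve permissions and timestamps
    if compress:
        cmd.append("--compress")  # -z: compress data during transfer
    if dry_run:
        cmd.append("--dry-run")   # -n: show what would be transferred, change nothing
    return cmd + [source, target]

# Hypothetical example: push a results directory to a remote server.
cmd = build_rsync_cmd("results/", "username@server.example:/path/in/server/")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually run the transfer
```

Building the argument list rather than a single shell string avoids quoting problems with unusual file names.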
+  \ No newline at end of file diff --git a/docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md b/docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md index 73f8b5980..fea39403b 100644 --- a/docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md +++ b/docs/Getting_Started/Next_Steps/Multithreading_Scaling_Example.md @@ -28,14 +28,14 @@ chosen for the purpose of illustration. ## Initial R Script ``` sl -library(doParallel) + library(doParallel) -registerDoParallel(strtoi(Sys.getenv('SLURM_CPUS_PER_TASK'))) + registerDoParallel(strtoi(Sys.getenv('SLURM_CPUS_PER_TASK'))) -# 60,000 calculations to be done: -foreach(z=1000000:1060000) %dopar% { -x <- sum(rnorm(z)) -} + # 60,000 calculations to be done: + foreach(z=1000000:1060000) %dopar% { + x <- sum(rnorm(z)) + } ``` You do not need to understand what the above R script is doing, but for @@ -54,14 +54,14 @@ iterations. So now lets change the number of iterations from 60,000 to ### Revised R Script ``` sl -library(doParallel) + library(doParallel) -registerDoParallel(strtoi(Sys.getenv('SLURM_CPUS_PER_TASK'))) + registerDoParallel(strtoi(Sys.getenv('SLURM_CPUS_PER_TASK'))) -# 5,000 calculations to be done: -foreach(z=1000000:1005000) %dopar% { -x <- sum(rnorm(z)) -} + # 5,000 calculations to be done: + foreach(z=1000000:1005000) %dopar% { + x <- sum(rnorm(z)) + } ``` Now we need to write a Slurm script to run this job. The wall time, @@ -80,21 +80,21 @@ took to get there. ### Slurm Script ``` sl -#!/bin/bash -e -#SBATCH --job-name=Scaling5k -#SBATCH --time=00:10:00 -#SBATCH --mem=512MB -#SBATCH --cpus-per-task=4 - -module load R -Rscript scaling.R + #!/bin/bash -e + #SBATCH --job-name=Scaling5k + #SBATCH --time=00:10:00 + #SBATCH --mem=512MB + #SBATCH --cpus-per-task=4 + + module load R + Rscript scaling.R ``` Let's run our Slurm script with sbatch and look at our output from `sacct`. 
``` sl -JobID JobName Elapsed TotalCPU Alloc MaxRSS State + JobID JobName Elapsed TotalCPU Alloc MaxRSS State -------------- ------------ ----------- ------------ ----- -------- ---------- 3106248 Scaling5k 00:03:17 12:51.334 4 COMPLETED 3106248.batch batch 00:03:17 00:00.614 4 4213K COMPLETED @@ -116,7 +116,7 @@ To test this, we will submit three more jobs, using 10,000 15,000 and 20,000 iterations. ``` sl -JobID JobName Elapsed TotalCPU Alloc MaxRSS State + JobID JobName Elapsed TotalCPU Alloc MaxRSS State -------------- ------------ ----------- ------------ ----- -------- ---------- 3106248 Scaling5k 00:03:17 12:51.334 4 COMPLETED 3106248.batch batch 00:03:17 00:00.614 4 4213K COMPLETED @@ -160,8 +160,8 @@ To find out we are going to have to run more tests. Let's try running our script with 2, 4, 6, 8, 10, 12, 14 and 16 CPUs and plot the results: ``` sl -sacct -JobID JobName Elapsed TotalCPU Alloc MaxRSS State + sacct + JobID JobName Elapsed TotalCPU Alloc MaxRSS State -------------- ------------ ----------- ------------ ----- -------- ---------- 3063584 Scaling2 00:06:29 12:49.971 2 COMPLETED 3063584.batch batch 00:06:29 00:00.591 2 4208K COMPLETED @@ -197,7 +197,7 @@ JobID JobName Elapsed TotalCPU Alloc MaxRSS State 3106181.0 Rscript 00:00:59 11:59.998 16 1205991K COMPLETED ``` - +  | | | |-------------------------------------------------------------------------|---------------------------------------------------------------------------| @@ -223,7 +223,7 @@ small. We could try running our script with more than 16 CPU cores, however, in the case of this script we start to have a pretty significant drop in marginal speed-up after eight CPU cores. - +  ![](../../assets/images/Multithreading_Scaling_Example_3.png) @@ -263,21 +263,21 @@ GB of memory. 
To be on the safe side, let's request 1 GB of memory and

### Revised Slurm Script

``` sl
-#!/bin/bash -e
-#SBATCH --account=nesi99999
-#SBATCH --job-name=Scaling60k # Job name (shows up in the queue)
-#SBATCH --time=00:30:00 # Walltime (HH:MM:SS)
-#SBATCH --mem=512MB # Memory per node
-#SBATCH --cpus-per-task=8 # Number of cores per task (e.g. OpenMP)
-
-module load R
-Rscript scaling.R
+ #!/bin/bash -e
+ #SBATCH --account=nesi99999
+ #SBATCH --job-name=Scaling60k # Job name (shows up in the queue)
+ #SBATCH --time=00:30:00 # Walltime (HH:MM:SS)
+ #SBATCH --mem=512MB # Memory per node
+ #SBATCH --cpus-per-task=8 # Number of cores per task (e.g. OpenMP)
+
+ module load R
+ Rscript scaling.R
```

-Checking on our job with `sacct`
+ Checking on our job with `sacct` 

``` sl
-JobID JobName Elapsed TotalCPU Alloc MaxRSS State
+ JobID JobName Elapsed TotalCPU Alloc MaxRSS State
-------------- ------------ ----------- ------------ ----- -------- ----------
3119026 Scaling60k 00:20:34 02:41:53 8 COMPLETED
3119026.batch batch 00:20:34 00:01.635 8 4197K COMPLETED
diff --git a/docs/Getting_Started/Next_Steps/Parallel_Execution.md b/docs/Getting_Started/Next_Steps/Parallel_Execution.md
index 19802116f..6271571fe 100644
--- a/docs/Getting_Started/Next_Steps/Parallel_Execution.md
+++ b/docs/Getting_Started/Next_Steps/Parallel_Execution.md
@@ -22,19 +22,19 @@ zendesk_section_id: 360000189716

Many scientific software applications are written to take advantage of
multiple CPUs in some way. But often this must be specifically requested
by the user at the time they run the program, rather than happening
-automatically.
+automatically. 

The three types of parallel execution we will cover are
[Multi-Threading(OMP)](#t_multi), [Distributed(MPI)](#t_mpi) and [Job
Arrays](#t_array).
!!! prerequisite Note
-Whenever Slurm mentions CPUs it is referring to *logical* CPU's (**2**
-*logical* CPU's = **1** *physical* core).
-- `--cpus-per-task=4` will give you 4 *logical* cores.
-- `--mem-per-cpu=512MB` will give 512 MB of RAM per *logical* core.
-- If `--hint=nomultithread` is used then `--cpus-per-task` will now
-refer to physical cores, but `--mem-per-cpu=512MB` still refers to
-logical cores.
+ Whenever Slurm mentions CPUs it is referring to *logical* CPUs (**2**
+ *logical* CPUs = **1** *physical* core).
+ - `--cpus-per-task=4` will give you 4 *logical* cores.
+ - `--mem-per-cpu=512MB` will give 512 MB of RAM per *logical* core.
+ - If `--hint=nomultithread` is used then `--cpus-per-task` will now
+ refer to physical cores, but `--mem-per-cpu=512MB` still refers to
+ logical cores.

See [our article on
hyperthreading](https://support.nesi.org.nz/hc/en-gb/articles/360000568236)
@@ -49,7 +49,7 @@ generally *via* a library such as OpenMP (Open MultiProcessing), TBB

-![par.png](../../assets/images/Parallel_Execution.png)*
+![par.png](../../assets/images/Parallel_Execution.png)* 
Fig. 2: Multi-threading involves dividing the process into multiple
'threads' which can be run across multiple cores.*

@@ -65,7 +65,7 @@ Example script:

#!/bin/bash -e
#SBATCH --job-name=MultithreadingTest # job name (shows up in the queue)
#SBATCH --time=00:01:00 # Walltime (HH:MM:SS)
-#SBATCH --mem=2048MB # memory in MB
+#SBATCH --mem=2048MB # memory in MB 
#SBATCH --cpus-per-task=4 # 2 physical cores per task.

taskset -c -p $$ #Prints which CPUs it can use
@@ -115,16 +115,16 @@ The expected output being

/home/user001/demo
```
!!! prerequisite Warning
-For non-MPI programs, either set `--ntasks=1` or do not use `srun` at
-all. Using `srun` in conjunction with `--cpus-per-task=1` will
-cause `--ntasks` to default to 2.
+ For non-MPI programs, either set `--ntasks=1` or do not use `srun` at
+ all. Using `srun` in conjunction with `--cpus-per-task=1` will
+ cause `--ntasks` to default to 2.
## Job Arrays Job arrays are best used for tasks that are completely independent, such as parameter sweeps, permutation analysis or simulation, that could be executed in any order and don't have to run at the same time. This kind -of work is often described as *embarrassingly parallel*. +of work is often described as *embarrassingly parallel*. An embarrassingly parallel problem is one that requires no communication or dependency between the tasks (unlike distributed computing problems that need communication between tasks). @@ -154,55 +154,55 @@ results `This is result 1` and `This is result 2` respectively. Use of the environment variable `${SLURM_ARRAY_TASK_ID}` is the recommended method of variation between the jobs. For example: -- - - As a direct input to a function. +- - - As a direct input to a function. -``` sl -matlab -nodisplay -r "myFunction(${SLURM_ARRAY_TASK_ID})" -``` + ``` sl + matlab -nodisplay -r "myFunction(${SLURM_ARRAY_TASK_ID})" + ``` -- As an index to an array. + - As an index to an array. -``` sl -inArray=(1 2 4 8 16 32 64 128) -input=${inArray[$SLURM_ARRAY_TASK_ID]} -``` + ``` sl + inArray=(1 2 4 8 16 32 64 128) + input=${inArray[$SLURM_ARRAY_TASK_ID]} + ``` -- For selecting input files. + - For selecting input files. -``` sl -input=inputs/mesh_${SLURM_ARRAY_TASK_ID}.stl -``` + ``` sl + input=inputs/mesh_${SLURM_ARRAY_TASK_ID}.stl + ``` -- As a seed for a pseudo-random number. -- In R + - As a seed for a pseudo-random number. 
+ - In R

-``` sl
-task_id = as.numeric(Sys.getenv("SLURM_ARRAY_TASK_ID"))
-set.seed(task_id)
-```
+ ``` sl
+ task_id = as.numeric(Sys.getenv("SLURM_ARRAY_TASK_ID"))
+ set.seed(task_id)
+ ```

-- In MATLAB
+ - In MATLAB

-``` sl
-task_id = str2num(getenv('SLURM_ARRAY_TASK_ID'))
-rng(task_id)
-```
+ ``` sl
+ task_id = str2num(getenv('SLURM_ARRAY_TASK_ID'))
+ rng(task_id)
+ ```

-*
-Using a seed is important, otherwise multiple jobs may
-receive the same pseudo-random numbers.*
+ *Using a seed is important, otherwise multiple jobs may
+ receive the same pseudo-random numbers.*

-- As an index to an array of filenames.
+ - As an index to an array of filenames. 

-``` sl
-files=( inputs/*.dat )
-input=${files[SLURM_ARRAY_TASK_ID]}
-# If there are 5 '.dat' files in 'inputs/' you will want to use '#SBATCH --array=0-4'
-```
+ ``` sl
+ files=( inputs/*.dat )
+ input=${files[SLURM_ARRAY_TASK_ID]}
+ # If there are 5 '.dat' files in 'inputs/' you will want to use '#SBATCH --array=0-4'
+ ```

This example will submit a job array with each job using a
-.dat file in 'inputs' as the variable input (in alphabetcial
-order).
+ This example will submit a job array with each job using a
+ .dat file in 'inputs' as the variable input (in alphabetical
+ order).

Environment variables *will not work* in the Slurm header. In place of
`${SLURM_ARRAY_TASK_ID}`, you can use the token `%a`. This can be
@@ -224,7 +224,7 @@ useful for sorting your output files e.g.

# Define your dimensions in bash arrays.
arr_time=({00..23})
-arr_day=("Mon" "Tue" "Wed" "Thur" "Fri" "Sat" "Sun")
+arr_day=("Mon" "Tue" "Wed" "Thur" "Fri" "Sat" "Sun") 

# Index the bash arrays based on the SLURM_ARRAY_TASK_ID
n_time=${arr_time[$(($SLURM_ARRAY_TASK_ID%${#arr_time[@]}))]} # '%' for finding remainder.
@@ -262,3 +262,4 @@ rm -r ../run_${SLURM_ARRAY_TASK_ID}                     

The Slurm documentation on job arrays can be found
[here](https://slurm.schedmd.com/job_array.html).
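The same `${SLURM_ARRAY_TASK_ID}` indexing patterns carry over to Python as well; a sketch (the input-file naming scheme is hypothetical, and defaulting to task 0 is an assumption so the snippet also runs outside a job array):

``` python
import os
import random

# Slurm sets SLURM_ARRAY_TASK_ID for each job in the array.
task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", "0"))

# As an index into an array of parameters...
sizes = [1, 2, 4, 8, 16, 32, 64, 128]
size = sizes[task_id % len(sizes)]

# ...as a reproducible, distinct random seed per array job...
random.seed(task_id)

# ...or to select an input file (hypothetical naming scheme).
input_file = f"inputs/mesh_{task_id}.stl"
print(f"task {task_id}: size {size}, input {input_file}")
```

Seeding from the task ID gives each array job its own reproducible pseudo-random stream.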
+  \ No newline at end of file diff --git a/docs/Getting_Started/Next_Steps/Submitting_your_first_job.md b/docs/Getting_Started/Next_Steps/Submitting_your_first_job.md index bee511f83..da3658c32 100644 --- a/docs/Getting_Started/Next_Steps/Submitting_your_first_job.md +++ b/docs/Getting_Started/Next_Steps/Submitting_your_first_job.md @@ -65,16 +65,16 @@ Note: if you are a member of multiple accounts you should add the line ## Testing -We recommend testing your job using the debug Quality of Service (QOS). +We recommend testing your job using the debug Quality of Service (QOS).  The debug QOS can be gained by adding the `sbatch` command line option -`--qos=debug`. +`--qos=debug`. This adds 5000 to the job priority so raises it above all non-debug jobs, but is limited to one small job per user at a time: no more than 15 minutes and no more than 2 nodes. !!! prerequisite Warning -Please do not run your code on the login node.  Any processes running -on the login node for long periods of time or using large numbers of -CPUs will be terminated. + Please do not run your code on the login node.  Any processes running + on the login node for long periods of time or using large numbers of + CPUs will be terminated. ## Submitting @@ -96,7 +96,7 @@ Documentation](https://slurm.schedmd.com/sbatch.html) ## Job Queue -The currently queued jobs can be checked using +The currently queued jobs can be checked using  ``` sl squeue @@ -132,9 +132,9 @@ sacct -S YYYY-MM-DD Each job will show as multiple lines, one line for the parent job and then additional lines for each job step. !!! prerequisite Tips -sacct -X Only show parent processes. -sacct --state=PENDING/RUNNING/FAILED/CANCELLED/TIMEOUT Filter jobs by -state. + sacct -X Only show parent processes. + sacct --state=PENDING/RUNNING/FAILED/CANCELLED/TIMEOUT Filter jobs by + state. 
You can find more details on its use on the [Slurm Documentation](https://slurm.schedmd.com/sacct.html) @@ -144,9 +144,9 @@ Documentation](https://slurm.schedmd.com/sacct.html) scancel <jobid> will cancel the job described by <jobid>. You can obtain the job ID by using sacct or squeue. !!! prerequisite Tips -scancel -u \[username\] Kill all jobs submitted by you. -scancel {\[n1\]..\[n2\]} Kill all jobs with an id between \[n1\] and -\[n2\] + scancel -u \[username\] Kill all jobs submitted by you. + scancel {\[n1\]..\[n2\]} Kill all jobs with an id between \[n1\] and + \[n2\] You can find more details on its use on the [Slurm Documentation](https://slurm.schedmd.com/scancel.html) diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-1.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-1.md index 02ce574d1..56c1a6a74 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-1.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-1.md @@ -25,13 +25,13 @@ zendesk_section_id: 360001091155 ## New and Improved - An updated web application is introducing a -[navigation](https://support.nesi.org.nz/hc/en-gb/articles/360003592875) -in the sidebar and links to important functions + [navigation](https://support.nesi.org.nz/hc/en-gb/articles/360003592875) + in the sidebar and links to important functions - Improved [project application -form](https://support.nesi.org.nz/hc/en-gb/articles/360003648716) -with automatic draft state so you can continue the application at a -later stage without the need to re-enter details + form](https://support.nesi.org.nz/hc/en-gb/articles/360003648716) + with automatic draft state so you can continue the application at a + later stage without the need to re-enter details - Moved the account profile to a dedicated page @@ -42,7 +42,7 @@ later stage without the need to re-enter details 
Fixed: Email address validation supports 'modern' domains when requesting a virtual home account. - +  ## Release Update - 18. May 2021 diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-3.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-3.md index 38ddfdb2e..8b25bf9cb 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-3.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-0-3.md @@ -25,13 +25,14 @@ zendesk_section_id: 360001091155 ## New and Improved - Improved the "Reset NeSI HPC Account Password" form to clear values -after submission. + after submission. - Lowered the time until a user can reset the password and adjusted -the feedback message to be more meaningful for users, if the change -was not successful . + the feedback message to be more meaningful for users, if the change + was not successful . ## Fixes Fixed: details in password and 2FA reset email message (IP address, device) are displayed again. +  \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-1-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-1-0.md index b16249143..942278c4b 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-1-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-1-0.md @@ -31,3 +31,4 @@ zendesk_section_id: 360001091155 Fixed: user affiliation not correct after first login to my.nesi. 
+  \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-10-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-10-0.md index 45b1e31e1..6ed031c36 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-10-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-10-0.md @@ -26,5 +26,5 @@ zendesk_section_id: 360001091155 - Updated dependencies - Added release notes to the UI - accessible via the 'hamburger' menu -on the right + on the right - Added grant/award details for NeSI project in Project view \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-12-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-12-0.md index 09588d231..d2b43892f 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-12-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-12-0.md @@ -25,14 +25,15 @@ zendesk_section_id: 360001091155 ## New and Improved - Added a banner to make users aware in case there is already a -current allocation request for the project before raising another -one + current allocation request for the project before raising another + one - Added details to make users aware of their HPC account status when -attempting to reset the MFA/2FA token + attempting to reset the MFA/2FA token ## Fixes Fixed: using 'Compute Units' instead of 'Core Hours' for Mahuika +  - +  \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-13-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-13-0.md index 973147b4f..3276726e3 100644 --- 
a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-13-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-13-0.md @@ -25,19 +25,19 @@ zendesk_section_id: 360001091155 ## New and Improved - When adding a new allocation request, new option to add the related -grants funding the project if missing. + grants funding the project if missing. - If a project has no compute allocation linked to it, a banner has -been added to make users aware of the missing allocation request and -the visual indicators have been made clearer. + been added to make users aware of the missing allocation request and + the visual indicators have been made clearer. - Added links for the access policy and the acceptable use policy in -the footer. - + the footer. +  ## Fixes - When adding a new allocation request, if an existing allocation -request is present, an alert message is visible: a cross has been -added to allow for the warning message to be closed. \ No newline at end of file + request is present, an alert message is visible: a cross has been + added to allow for the warning message to be closed. \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-14-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-14-0.md index 919334d2e..f67d52174 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-14-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-14-0.md @@ -25,12 +25,12 @@ zendesk_section_id: 360001091155 ## New and Improved - New option to opt-out of CC'ing all project members when requesting -a renewed allocation. + a renewed allocation. 
- Only sending one notification message of end of allocation per -project covering both Mahuika and Māui allocations - + project covering both Mahuika and Māui allocations +  ## Fixes diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-15-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-15-0.md index f4cf8dee1..b5332e19a 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-15-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-15-0.md @@ -25,25 +25,25 @@ zendesk_section_id: 360001091155 ## New and Improved - New Allocation Request page has been improved to add field -validations + validations - Links to information about notifications have been added under -Account > My Profile, when the Account Profile is edited + Account > My Profile, when the Account Profile is edited - If my.nesi.org.nz portal cannot connect to NeSI server, a -descriptive error message will be displayed - + descriptive error message will be displayed +  ## Fixes - New allocation request start date rules updated when the project -does not have a current allocation, today date will be used as the -default value + does not have a current allocation, today date will be used as the + default value - New allocation request interval between the start date and end date -should be around one year but with the quarter splits, it has been -increased to 15 months + should be around one year but with the quarter splits, it has been + increased to 15 months - Inactive users won't receive email notifications when a new -allocation request is created + allocation request is created - Double clicks have been displayed on submit buttons to avoid -duplicate items being created \ No newline at end of file + duplicate items being created \ No newline at end of file diff --git 
a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-16-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-16-0.md index 7fa2f59d0..ed44c6371 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-16-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-16-0.md @@ -26,10 +26,10 @@ zendesk_section_id: 360001091155 - Ability to add new affiliation from Account > My Profile menu - Display of the full organisation with department (if relevant) when -creating an new allocation request + creating an new allocation request - Addition of progress bars - +  ## Fixes diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-17-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-17-0.md index 08de70a1b..1c4ae728f 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-17-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-17-0.md @@ -25,13 +25,13 @@ zendesk_section_id: 360001091155 ## New and Improved - “Provide Feedback” is now redirected to - + - In a project request form, all grants are now visible by default - After submitting a new allocation request, the user will be able to -see the Zendesk link in case further comments need to be added + see the Zendesk link in case further comments need to be added - Addition of Keycloak packages for future need - Make the date and citation fields mandatory for new research output -entries + entries ## Fixes diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-18-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-18-0.md index e11aadc8d..795150b84 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-18-0.md +++ 
b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-18-0.md @@ -25,29 +25,29 @@ zendesk_section_id: 360001091155 ## New and Improved - A link to [NeSI's privacy -policy](https://www.nesi.org.nz/about-us/security-privacy/privacy-policy) -has been added to the bottom of all pages of my.nesi environment + policy](https://www.nesi.org.nz/about-us/security-privacy/privacy-policy) + has been added to the bottom of all pages of my.nesi environment - We've shifted from using Tuakiri's RapidConnect service to Tuakiri's -OpenID Connect bridge to improve overall security of my.nesi's user -authentication process. + OpenID Connect bridge to improve overall security of my.nesi's user + authentication process. - We've updated the display features of the table showing Merit grants -available to researchers in order to improve our ability to make -changes and future updates to the table's information. + available to researchers in order to improve our ability to make + changes and future updates to the table's information. ## Fixes - Fixed a crash that used to occur when a user wanted to join a -project on my.nesi and delete an entry within that project. + project on my.nesi and delete an entry within that project. - Fixed a security vulnerability in the my.nesi environment related to -the libwebp library, a code library used to render and display -images in the *WebP* format. + the libwebp library, a code library used to render and display + images in the *WebP* format.  - Updated the allocation request form's end date message, restricting -allocation requests to no further than one year in the future. + allocation requests to no further than one year in the future. - Changed which system components from NeSI's System Status page -*()* are default notifications emailed -to users. Users can customise their system status email -notifications at any time. [Read more about that -here](https://support.nesi.org.nz/hc/en-gb/articles/8202966997775). 
+ *()* are default notifications emailed + to users. Users can customise their system status email + notifications at any time. [Read more about that + here](https://support.nesi.org.nz/hc/en-gb/articles/8202966997775). If you have any questions about any of the improvements or fixes, please [contact NeSI Support](mailto:support@nesi.org.nz). \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-2-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-2-0.md index 740db1ce5..d19e354b7 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-2-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-2-0.md @@ -27,9 +27,9 @@ zendesk_section_id: 360001091155 - Added NeSI allocations list to project details view - Improved feedback for users without active projects - Improved validation of phone number formats (incl. 
international -prefix) + prefix) - Improved account profile form to create more clarity about mandatory -fields + fields ## Fixes diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-3-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-3-0.md index 85ec3e0ee..ee14d9d2a 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-3-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-3-0.md @@ -26,6 +26,7 @@ zendesk_section_id: 360001091155 - Added project member list to project details view - Added ability to manage project members (add/remove, assign -role) for project owners + role) for project owners - Added feedback link to the 'user name' menu +  \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-4-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-4-0.md index f2a072f4d..f1bef6c0e 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-4-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-4-0.md @@ -26,7 +26,7 @@ zendesk_section_id: 360001091155 - UI layout changes for project details view - Rendering a basic usage data for compute (and storage if quota -information available) + information available) - Added contextual links for support article references ## Fixes diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-5-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-5-0.md index b091b90c1..da2019bc6 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-5-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-5-0.md @@ -25,11 +25,11 @@ zendesk_section_id: 
360001091155 ## New and Improved - Introducing allocation renewal requests for project owners to be -made from my.nesi.org.nz + made from my.nesi.org.nz - Apply for Access form changes -- Allowing password reset regardless of current project membership +- Allowing password reset regardless of current project membership  ## Fixes - Improved reliability to manage project members (so changes will be -reflected in the HPC system in 30-60 minutes) \ No newline at end of file + reflected in the HPC system in 30-60 minutes) \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-6-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-6-0.md index 9a51c927e..a3d685905 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-6-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-6-0.md @@ -25,6 +25,6 @@ zendesk_section_id: 360001091155 ## New and Improved - Introducing NeSI Notification Preferences to create more -transparency for users + transparency for users - Apply for Access form improvements to make naming project members -more consistent \ No newline at end of file + more consistent \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-7-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-7-0.md index 9fe4a5852..445e3223a 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-7-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-7-0.md @@ -25,9 +25,9 @@ zendesk_section_id: 360001091155 ## New and Improved - Notify users mentioned in project applications (if there is no NeSI -account linked to the users email address). + account linked to the users email address). 
- Send email notification to affected users when project membership -status is changed (by the project owner). + status is changed (by the project owner). - Added optional banner for e.g. holiday announcements. \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-8-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-8-0.md index ae9a6f194..3da1c0ea5 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-8-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-8-0.md @@ -25,9 +25,9 @@ zendesk_section_id: 360001091155 ## New and Improved - Improved [NeSI Notification -Preferences](https://support.nesi.org.nz/hc/en-gb/articles/4563294188687) -to be project-specific + Preferences](https://support.nesi.org.nz/hc/en-gb/articles/4563294188687) + to be project-specific - Improved [allocation renewal -requests](https://support.nesi.org.nz/hc/en-gb/articles/4600222769295) -by providing more context \ No newline at end of file + requests](https://support.nesi.org.nz/hc/en-gb/articles/4600222769295) + by providing more context \ No newline at end of file diff --git a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-9-0.md b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-9-0.md index 1589baed5..d195416bb 100644 --- a/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-9-0.md +++ b/docs/Getting_Started/Release_Notes_my-nesi-org-nz/my-nesi-org-nz_release_notes_v2-9-0.md @@ -25,6 +25,6 @@ zendesk_section_id: 360001091155 ## New and Improved - Improved [allocation renewal -requests](https://support.nesi.org.nz/hc/en-gb/articles/4600222769295) default -organisation selection + requests](https://support.nesi.org.nz/hc/en-gb/articles/4600222769295) default + organisation selection - Added a sub-section to 
list open allocation requests \ No newline at end of file diff --git a/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md b/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md index f9fcd898c..16737a43f 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md +++ b/docs/Getting_Started/my-nesi-org-nz/Logging_in_to_my-nesi-org-nz.md @@ -32,7 +32,7 @@ Most New Zealand universities and Crown Research Institutes are members of the [Tuakiri authentication federation](https://www.reannz.co.nz/products-and-services/tuakiri/join/), but many other institutions, including private sector organisations and -most central and local government agencies, are not. +most central and local government agencies, are not.  See also [Creating a NeSI Account Profile](https://support.nesi.org.nz/hc/en-gb/articles/360000159715) @@ -42,7 +42,7 @@ Profile](https://support.nesi.org.nz/hc/en-gb/articles/360000159715) In case your organisation is not part of the Tuakiri federated identity management service, a user can still [request a NeSI Account profile.](https://my.nesi.org.nz/html/request_nesi_account) NeSI will -(if approved) provision a so-called "virtual home account" on Tuakiri. +(if approved) provision a so-called "virtual home account" on Tuakiri.  See also [Account Requests for non-Tuakiri Members](https://support.nesi.org.nz/hc/en-gb/articles/360000216035) @@ -84,7 +84,7 @@ value of your auEduPersonSharedToken as proffered by your institution's identity provision service and its value as recorded in the NeSI database (more common), you will not be able to log in to My NeSI. If you cannot log in, please raise a support ticket with your institutions -IT support. +IT support. For troubleshooting the support team may ask you for a PDF of your Tuakiri attributes. 
Tuakiri does not include your password in the attribute printout and there is no security risk involved in providing a diff --git a/docs/Getting_Started/my-nesi-org-nz/Managing_notification_preferences.md b/docs/Getting_Started/my-nesi-org-nz/Managing_notification_preferences.md index 533ffd198..6db3bdde6 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Managing_notification_preferences.md +++ b/docs/Getting_Started/my-nesi-org-nz/Managing_notification_preferences.md @@ -21,13 +21,13 @@ zendesk_section_id: 360001059296 ## Overview -NeSI aims to keep users informed via various communication channels. +NeSI aims to keep users informed via various communication channels.  ## Checking and setting your preferences Within [my.nesi.org.nz](https://my.nesi.org.nz/account/preference) you can find a summary of the current subscriptions under NeSI Notification -Preferences. +Preferences.  In order to manage your subscription to notifications, either log into [my.nesi](https://my.nesi.org.nz/account/preference) or use the link @@ -40,9 +40,9 @@ notifications. ![2022-04-12\_16-46-56.png](../../assets/images/Managing_notification_preferences.png) +  - - +  ### See also diff --git a/docs/Getting_Started/my-nesi-org-nz/Navigating_the_my-nesi-org-nz_web_interface.md b/docs/Getting_Started/my-nesi-org-nz/Navigating_the_my-nesi-org-nz_web_interface.md index 0f2b52bb5..c99bca649 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Navigating_the_my-nesi-org-nz_web_interface.md +++ b/docs/Getting_Started/my-nesi-org-nz/Navigating_the_my-nesi-org-nz_web_interface.md @@ -29,14 +29,14 @@ can be found at the bottom of the sidebar. ### Breadcrumb navigation A breadcrumb navigation is displayed when viewing sections of the -site. +site. Example: Home / Projects / List Project ## Collapsible elements The triple bar (or hamburger) icons allow elements to be collapsed or revealed. The left icon does collapse the sidebar and therefore hides -the navigation elements contained. 
+the navigation elements contained. The triple bar on the right is used for future functions. The **<** arrow icon on the bottom of the sidebar does minimise the diff --git a/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md b/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md index 8bb781cfc..ca3dd5baf 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md +++ b/docs/Getting_Started/my-nesi-org-nz/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.md @@ -23,17 +23,17 @@ zendesk_section_id: 360001059296 ## How to raise a request using my.nesi.org.nz? 1. Login to and select a project -from the list. -![Screenshot 2023-08-07 at 15-21-11 -my.nesi.png](../../assets/images/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.png) + from the list. + ![Screenshot 2023-08-07 at 15-21-11 + my.nesi.png](../../assets/images/Requesting_to_renew_an_allocation_via_my-nesi-org-nz.png) 2. Click the Plus button icon 'action' next to the compute allocation -line item -![Screenshot 2023-08-07 at 15-21-51 -my.nesi.png](../../assets/images/Requesting_to_renew_an_allocation_via_my-nesi-org-nz_0.png) + line item  + ![Screenshot 2023-08-07 at 15-21-51 + my.nesi.png](../../assets/images/Requesting_to_renew_an_allocation_via_my-nesi-org-nz_0.png) 3. Verify the preset values and add a comment in case you update -some. -Finally, click 'Submit' -![mceclip2.png](../../assets/images/Requesting_to_renew_an_allocation_via_my-nesi-org-nz_1.png) + some. + Finally, click 'Submit'  + ![mceclip2.png](../../assets/images/Requesting_to_renew_an_allocation_via_my-nesi-org-nz_1.png) ### Can I request any allocation size? @@ -47,12 +47,13 @@ differ from our forecast. Please be aware that: - First and subsequent allocations are subject to the NeSI allocation -size and duration limits in force at the time they are considered by -our reviewers. 
+ size and duration limits in force at the time they are considered by + our reviewers. - An allocation from an institution's entitlement is subject to -approval by that institution. + approval by that institution. See [Project Extensions and New Allocations on Existing Projects](https://support.nesi.org.nz/hc/en-gb/articles/360000202196) for more details. +  \ No newline at end of file diff --git a/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md b/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md index bec6f4e1c..6c8551b9f 100644 --- a/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md +++ b/docs/Getting_Started/my-nesi-org-nz/The_NeSI_Project_Request_Form.md @@ -30,22 +30,22 @@ an in-progress request (draft) that you previously started are described below. 1. Point your web browser to -[https://my.nesi.org.nz](https://my.nesi.org.nz/projects/apply) and -login. Select "Apply for Access" from the sidebar navigation on the -left. -![mceclip1.png](../../assets/images/The_NeSI_Project_Request_Form.png) + [https://my.nesi.org.nz](https://my.nesi.org.nz/projects/apply) and + login. Select "Apply for Access" from the sidebar navigation on the + left. + ![mceclip1.png](../../assets/images/The_NeSI_Project_Request_Form.png) 2. Choose from the following items: -- **If you are returning to continue work on a draft request** you -started earlier, choose the link based on the date/time or title -you've set. -- **For a new project request, select "Start a new -application". **Note, this is the default in case there is not -draft request for your account. + - **If you are returning to continue work on a draft request** you + started earlier, choose the link based on the date/time or title + you've set. + - **For a new project request, select "Start a new + application". **Note, this is the default in case there is not + draft request for your account. 3. 
When your request is ready to submit, progress through the form -sections using the 'Next' button at the bottom of the page until you -reach the 'Summary' section. After clicking the 'Submit' button and -passing the validation the request is submitted for review. You will -also receive a confirmation via email. + sections using the 'Next' button at the bottom of the page until you + reach the 'Summary' section. After clicking the 'Submit' button and + passing the validation the request is submitted for review. You will + also receive a confirmation via email. ### Saving a Request for Later @@ -58,7 +58,8 @@ The request can only be successfully submitted once all mandatory data has been entered. The final section in the form 'Summary' will highlight missing data and allow you to navigate back to the relevant section. - +  ' +  \ No newline at end of file diff --git a/docs/Getting_Started/my-nesi-org-nz/Tuakiri_Attribute_Validator.md b/docs/Getting_Started/my-nesi-org-nz/Tuakiri_Attribute_Validator.md index 2b721c95c..e886b539b 100644 --- a/docs/Getting_Started/my-nesi-org-nz/Tuakiri_Attribute_Validator.md +++ b/docs/Getting_Started/my-nesi-org-nz/Tuakiri_Attribute_Validator.md @@ -49,7 +49,7 @@ value of your auEduPersonSharedToken as proffered by your institution's identity provision service and its value as recorded in the NeSI database (more common), you will not be able to log in to My NeSI. If you cannot log in, please raise a support ticket with your institutions -IT support. +IT support.  For troubleshooting the support team may ask you for a PDF of your Tuakiri attributes. 
Tuakiri does not include your password in the diff --git a/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md b/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md index 6cfb09cc9..59a028789 100644 --- a/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md +++ b/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Billing_process.md @@ -20,13 +20,13 @@ zendesk_section_id: 7348936006031 [//]: <> (REMOVE ME IF PAGE VALIDATED) Charges for Subscription usage are typically invoiced on a quarterly -basis. - +basis. + If your organisation requires a Purchase Order (PO) Number be used for invoices, the PO Number must be provided to us upon signing your Subscription service agreement. - +  If you have any questions about Subscription billing processes, don’t hesitate to [get in touch](mailto:info@nesi.org.nz). \ No newline at end of file diff --git a/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md b/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md index 5c8f149b1..80252eeef 100644 --- a/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md +++ b/docs/NeSI_Service_Subscriptions/Contracts_and_billing_processes/Types_of_contracts.md @@ -22,21 +22,21 @@ zendesk_section_id: 7348936006031 Typically our Subscription contracts are based on one-year terms and invoiced on a quarterly basis\*. -- *HPC Platform: -*This is an inclusive package that provides access to all NeSI -services (compute, storage, support, consultancy, etc.) in one -Subscription so that you have the flexibility at any point during -the term of the contract to use any service. You are only charged -for what you use. 
- -- *National Data Transfer Platform - Membership & managed endpoint: -*A National Data Transfer Platform is delivered through a -partnership between NeSI, REANNZ, Globus, and research institutions -across the country. [Read more -here](https://www.nesi.org.nz/services/data-services). As a managed -endpoint, your institution can provide secure, reliable, and fast -data transfer to/from NeSI’s HPC Platform as well as to/from other -Globus endpoints across New Zealand and internationally. +- *HPC Platform: + *This is an inclusive package that provides access to all NeSI + services (compute, storage, support, consultancy, etc.) in one + Subscription so that you have the flexibility at any point during + the term of the contract to use any service. You are only charged + for what you use. + +- *National Data Transfer Platform - Membership & managed endpoint: + *A National Data Transfer Platform is delivered through a + partnership between NeSI, REANNZ, Globus, and research institutions + across the country. [Read more + here](https://www.nesi.org.nz/services/data-services). As a managed + endpoint, your institution can provide secure, reliable, and fast + data transfer to/from NeSI’s HPC Platform as well as to/from other + Globus endpoints across New Zealand and internationally.  We are also happy to discuss other custom options for things like our Training service. @@ -45,7 +45,7 @@ If you have any questions or would like to discuss our subscription terms or invoicing schedule in more detail, [get in touch](mailto:info@nesi.org.nz). 
- +  *\*National Data Transfer Platform service agreements are invoiced upon contract signing, as the fee is a one-time, annual, up-front fee.* \ No newline at end of file diff --git a/docs/NeSI_Service_Subscriptions/Overview/Pricing.md b/docs/NeSI_Service_Subscriptions/Overview/Pricing.md index 84d7fc1ef..be6524f4d 100644 --- a/docs/NeSI_Service_Subscriptions/Overview/Pricing.md +++ b/docs/NeSI_Service_Subscriptions/Overview/Pricing.md @@ -27,7 +27,7 @@ Subscriptions: Prices are reviewed annually and subject to change. - +  ## Current pricing @@ -36,8 +36,9 @@ page](https://www.nesi.org.nz/community/partners-pricing#subscriptions) on the NeSI website for the latest pricing information. The website page will always display the current pricing of a NeSI service. - +  If you have any questions about anything mentioned on this page, don’t hesitate to [get in touch](mailto:info@nesi.org.nz). +  \ No newline at end of file diff --git a/docs/NeSI_Service_Subscriptions/Overview/Questions.md b/docs/NeSI_Service_Subscriptions/Overview/Questions.md index 1cfbbda43..ad159c968 100644 --- a/docs/NeSI_Service_Subscriptions/Overview/Questions.md +++ b/docs/NeSI_Service_Subscriptions/Overview/Questions.md @@ -26,7 +26,7 @@ Visit our Services sections on the NeSI website for more details on the ways we're supporting New Zealand research communities through: - [High Performance Computing & -Analytics](https://www.nesi.org.nz/services/high-performance-computing-and-data-analytics) + Analytics](https://www.nesi.org.nz/services/high-performance-computing-and-data-analytics) - [Consultancy](https://www.nesi.org.nz/services/consultancy) @@ -34,3 +34,4 @@ Analytics](https://www.nesi.org.nz/services/high-performance-computing-and-data- - [Training](https://www.nesi.org.nz/services/training) +  \ No newline at end of file diff --git a/docs/NeSI_Service_Subscriptions/Overview/What_is_a_Subscription.md b/docs/NeSI_Service_Subscriptions/Overview/What_is_a_Subscription.md index 
85a9d6807..68adfca2d 100644 --- a/docs/NeSI_Service_Subscriptions/Overview/What_is_a_Subscription.md +++ b/docs/NeSI_Service_Subscriptions/Overview/What_is_a_Subscription.md @@ -25,34 +25,34 @@ researchers to access our services to build your research capabilities. Subscribing to NeSI's services provides you with: - Managed entitlements on NeSI's HPC platform for your research -projects and programmes. This includes access to: + projects and programmes. This includes access to: -- high-capacity CPUs, GPUs and high memory nodes + - high-capacity CPUs, GPUs and high memory nodes -- interactive computing via Jupyter Notebooks, containers, and -virtual lab environments + - interactive computing via Jupyter Notebooks, containers, and + virtual lab environments -- an extensive pre-built software library + - an extensive pre-built software library -- data storage for compute, from the highest performance storage -on NeSI's HPC platform or via NeSI's long-term storage Nearline -Service + - data storage for compute, from the highest performance storage + on NeSI's HPC platform or via NeSI's long-term storage Nearline + Service - Personalised support to assist your researchers across the breadth -of their NeSI experience, from applying for projects/allocations, to -getting their projects running on the platform, to general -troubleshooting or experimenting with new tools or techniques + of their NeSI experience, from applying for projects/allocations, to + getting their projects running on the platform, to general + troubleshooting or experimenting with new tools or techniques - Secure, reliable, and fast data transfer to/from NeSI’s HPC Platform -as well as to/from other Globus endpoints across New Zealand and -internationally + as well as to/from other Globus endpoints across New Zealand and + internationally  - Dedicated scientific and HPC-focused computational and data science -support + support - Advice and partnership on capability-building initiatives for -research 
communities or collaboration on advanced research computing -strategies + research communities or collaboration on advanced research computing + strategies If you have any questions about anything mentioned on this page, don’t hesitate to [get in touch](mailto:info@nesi.org.nz). \ No newline at end of file diff --git a/docs/NeSI_Service_Subscriptions/Service_Governance/Allocation_approvals.md b/docs/NeSI_Service_Subscriptions/Service_Governance/Allocation_approvals.md index 670069999..0a6488fe8 100644 --- a/docs/NeSI_Service_Subscriptions/Service_Governance/Allocation_approvals.md +++ b/docs/NeSI_Service_Subscriptions/Service_Governance/Allocation_approvals.md @@ -27,17 +27,18 @@ time (in this case, the term of the Subscription). Depending on the nature of your Subscription, you can choose to directly manage allocation approvals for projects from your institution or leave those decisions to the NeSI Support Team. See examples of some common -approval scenarios below. +approval scenarios below. + - -**Subscription used by a single project** +**Subscription used by a single project** If there is only one project associated with a Subscription, the contract’s full resource entitlement can be allocated to that single project (with the division of resources such as storage vs compute left to the discretion of the project owner). -**Subscription used by multiple projects** +**Subscription used by multiple projects** If the Subscription contract covers multiple projects, the contract entitlement is split between the projects. The Subscriber can decide how it should be split, or NeSI’s Support Team can review the project -requests and make a recommendation based on project requirements. +requests and make a recommendation based on project requirements. 
+ \ No newline at end of file diff --git a/docs/NeSI_Service_Subscriptions/Service_Governance/Service_Governance_contact.md b/docs/NeSI_Service_Subscriptions/Service_Governance/Service_Governance_contact.md index ef353a8ea..327a07e8e 100644 --- a/docs/NeSI_Service_Subscriptions/Service_Governance/Service_Governance_contact.md +++ b/docs/NeSI_Service_Subscriptions/Service_Governance/Service_Governance_contact.md @@ -24,7 +24,7 @@ Governance Contact on behalf of your institution. The role of this person includes: - acting as a primary liaison / contact person on behalf of your -institution for anything related to the service Subscription + institution for anything related to the service Subscription - approving allocation requests for projects from your institution diff --git a/docs/NeSI_Service_Subscriptions/Service_Governance/Subscriber_Monthly_Usage_Reports.md b/docs/NeSI_Service_Subscriptions/Service_Governance/Subscriber_Monthly_Usage_Reports.md index 47a6e6394..f0a032bc8 100644 --- a/docs/NeSI_Service_Subscriptions/Service_Governance/Subscriber_Monthly_Usage_Reports.md +++ b/docs/NeSI_Service_Subscriptions/Service_Governance/Subscriber_Monthly_Usage_Reports.md @@ -40,7 +40,7 @@ follow up on system-related requests. The monthly emails also include any timely updates on other NeSI news or training events that would be of interest to your research community. - +  ## How to read your Subscriber Usage Report @@ -49,31 +49,31 @@ reference, and compare recent and past usage. - at the top of each tab is a summary of the contract, indicating the: -- term of agreement (contract start and end dates) + - term of agreement (contract start and end dates) -- maximum contracted value + - maximum contracted value -- value of services utilised to date + - value of services utilised to date - usage for each service is shown in the corresponding sections below - in cases where a service has differently priced resources (eg. 
-Compute pricing varies across our CPU and GPU resources), we will
-also indicate additional information (eg. “Type of CPU” and “Type of
-GPU”) so you have a breakdown of what usage contributes to the total
-chargeable costs that month. See the Pricing section above for more
-information on our service pricing.
+    Compute pricing varies across our CPU and GPU resources), we will
+    also indicate additional information (eg. “Type of CPU” and “Type of
+    GPU”) so you have a breakdown of what usage contributes to the total
+    chargeable costs that month. See the Pricing section above for more
+    information on our service pricing.

- to showcase full value delivered through NeSI services, our reports
-will also show usage that is not chargeable (eg. Merit usage). This
-is shown simply for information purposes and is not included or
-reflected on invoices.
+    will also show usage that is not chargeable (eg. Merit usage). This
+    is shown simply for information purposes and is not included or
+    reflected on invoices.

Usage reports are generally ready to view by the middle of the following
month. So, for example, January usage will appear as a new tab by mid-
to late February.

- 
+  

If you have any questions about anything mentioned on this page, don’t
hesitate to [get in touch](mailto:info@nesi.org.nz).
\ No newline at end of file diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md b/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md index 9bb14183f..039f1bcd5 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Build_an_Apptainer_container_on_a_Milan_compute_node.md @@ -42,8 +42,8 @@ cat << EOF > my_container.def BootStrap: docker From: ubuntu:20.04 %post -apt-get -y update -apt-get install -y wget + apt-get -y update + apt-get install -y wget EOF ``` @@ -89,16 +89,16 @@ More information about how to submit a Slurm job is available in the job](https://support.nesi.org.nz/hc/en-gb/articles/360000684396) support page. !!! prerequisite Build environment variables -To build containers, you need to ensure that Apptainer has enough -storage space to create intermediate files. It also requires a cache -folder to save images pulled from a different location (e.g. -DockerHub). By default both of these locations are set to `/tmp` which -has limited space, large builds may exceed this limitation causing the -builder to crash. The environment variables `APPTAINER_TMPDIR` and -`APPTAINER_CACHEDIR` are used to overwrite the default location of -these directories. -In this example, the Slurm job submission script creates these folders -using your project `nobackup` folder. + To build containers, you need to ensure that Apptainer has enough + storage space to create intermediate files. It also requires a cache + folder to save images pulled from a different location (e.g. + DockerHub). By default both of these locations are set to `/tmp` which + has limited space, large builds may exceed this limitation causing the + builder to crash. 
The environment variables `APPTAINER_TMPDIR` and + `APPTAINER_CACHEDIR` are used to overwrite the default location of + these directories. + In this example, the Slurm job submission script creates these folders + using your project `nobackup` folder. ## Known limitations @@ -118,6 +118,6 @@ While making image from oci registry: error fetching image to cache: while build it is likely due to an upstream issue (e.g. bad image on Dockerhub). In this case, try an older image version or a different base image. !!! prerequisite Other limitations -This method, using fakeroot, is known to **not** work for all types of -Apptainer/Singularity containers. -If you encounter an issue, please contact us at . \ No newline at end of file + This method, using fakeroot, is known to **not** work for all types of + Apptainer/Singularity containers. + If you encounter an issue, please contact us at . \ No newline at end of file diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Compiling_software_on_Mahuika.md b/docs/Scientific_Computing/HPC_Software_Environment/Compiling_software_on_Mahuika.md index ba9049d57..aef7786c2 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Compiling_software_on_Mahuika.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Compiling_software_on_Mahuika.md @@ -39,9 +39,9 @@ The GNU and Intel compilers can be accessed by loading one of the toolchains: - `module load gimkl/2020a` - the default toolchain, providing GNU -compilers (version 9.2.0), Intel MPI and Intel MKL + compilers (version 9.2.0), Intel MPI and Intel MKL - `module load intel/2020a` - Intel compilers (version 2020.0.166), -Intel MPI and Intel MKL + Intel MPI and Intel MKL A large number of dependencies are built against these toolchains, so they are usually a good place to start when building your own software. @@ -71,33 +71,33 @@ developers of the software you plan to use. 
Nevertheless, the following should give you an impression which steps you usually need to consider: 1. Change into your desired source code directory. We suggest you use -`/nesi/project/`, or more typically one of its -subdirectories. You may instead use `/nesi/nobackup/` (or -one of its subdirectories) if you don't mind the software not being -backed up and prone to automatic deletion in certain circumstances. + `/nesi/project/`, or more typically one of its + subdirectories. You may instead use `/nesi/nobackup/` (or + one of its subdirectories) if you don't mind the software not being + backed up and prone to automatic deletion in certain circumstances. 2. Download the source code. This could be done via a repository -checkout (`git clone `) or via downloading a tarball -(`wget `). + checkout (`git clone `) or via downloading a tarball + (`wget `). 3. Ensure the tarball is not a tarbomb, -using `tar tf | sort | less` (`tar tzf ...` if the -source code is a gzipped tarball, `tar tjf ...` if a bzip2 -compressed tarball). If you find that the tarball is in fact a -tarbomb, you will need to handle it using special techniques. + using `tar tf | sort | less` (`tar tzf ...` if the + source code is a gzipped tarball, `tar tjf ...` if a bzip2 + compressed tarball). If you find that the tarball is in fact a + tarbomb, you will need to handle it using special techniques. 4. Unpack the tarball using `tar xf `. Change into the -source directory. + source directory. 5. Load the preferred toolchain (or compiler module) and modules for -any additional required libraries (`module load gimkl FFTW`) + any additional required libraries (`module load gimkl FFTW`) 6. Run the configure script with appropriate options, -e.g. `./configure --prefix= --use-fftw=$EBROOTFFTW  `(options -can usually be listed using `./configure --help`) + e.g. `./configure --prefix= --use-fftw=$EBROOTFFTW  `(options + can usually be listed using `./configure --help`) 7. 
In some applications you need to adjust the `Makefile` (generated by -`configure`) to reflect your preferred compiler, and library options -(see below) + `configure`) to reflect your preferred compiler, and library options + (see below) 8. Compile the code (`make``)` 9. install the binaries and libraries into the specified directory -(`make install`) - + (`make install`) +  ## Compilers @@ -105,22 +105,22 @@ Compilers are provided for Fortran, C, and C++. For MPI-parallelised code, different compilers typically need to be used. The different **compilers** are listed: ------------------------------------------------------------------------------ -Language Cray Intel GNU ------------ --------------------- --------------------- --------------------- -Fortran ftn ifort gfortran + ----------------------------------------------------------------------------- + Language Cray Intel GNU + ----------- --------------------- --------------------- --------------------- + Fortran ftn ifort gfortran -Fortran + ftn mpiifort mpif90 -MPI + Fortran + ftn mpiifort mpif90 + MPI -C cc icc gcc + C cc icc gcc -C + MPI cc mpiicc mpicc + C + MPI cc mpiicc mpicc -C++ CC icpc g++ + C++ CC icpc g++ -C++ + MPI CC mpiicpc mpicxx ------------------------------------------------------------------------------ + C++ + MPI CC mpiicpc mpicxx + ----------------------------------------------------------------------------- **Note**, Cray uses compiler wrappers which are described [later in more detail](#cray-programming-environment). @@ -142,35 +142,35 @@ change them if you decide to switch compilers. 
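Step 3's tarbomb check can be tried safely on a throwaway archive before you apply it to a real download; in this sketch the package name `mypkg-1.0` and its contents are placeholders:

``` sl
# Build a small example tarball in a scratch directory, then list its
# contents without extracting. A well-behaved tarball has a single
# top-level directory, so every entry starts with "mypkg-1.0/".
tmpdir=$(mktemp -d)
cd "$tmpdir"
mkdir -p mypkg-1.0/src
echo 'int main(void) { return 0; }' > mypkg-1.0/src/main.c
tar czf mypkg-1.0.tar.gz mypkg-1.0
tar tzf mypkg-1.0.tar.gz | sort
# A tarbomb would instead list entries such as "src/main.c" at the top
# level, scattering files into your current directory on extraction.
```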
The following table provides a list of commonly used compiler **options** for the different compilers: ------------------------------------------------------------------------------------------------------------------------- -Group Cray Intel GNU Notes ---------------- -------------------------------- -------------------- ---------------------------------- --------------- -Debugging `-g` or `-G{0,1,2,fast}` `-g` or `-g or -g{0,1,2,3}` Set level of -`-debug [keyword]` debugging -information, -some levels may -disable certain -compiler -optimisations + ------------------------------------------------------------------------------------------------------------------------ + Group Cray Intel GNU Notes + --------------- -------------------------------- -------------------- ---------------------------------- --------------- + Debugging `-g` or `-G{0,1,2,fast}` `-g` or `-g or -g{0,1,2,3}` Set level of + `-debug [keyword]` debugging + information, + some levels may + disable certain + compiler + optimisations -Light compiler `-O2` `-O2` `-O2` -optimisation + Light compiler `-O2` `-O2` `-O2`   + optimisation -Aggressive `-O3 -hfp3` `-O3 -ipo` `-O3 -ffast-math -funroll-loops` This may affect -compiler numerical -optimisation accuracy + Aggressive `-O3 -hfp3` `-O3 -ipo` `-O3 -ffast-math -funroll-loops` This may affect + compiler numerical + optimisation accuracy -Architecture Load this module first: `-xHost` `-march=native -mtune=native` Build and -specific `module load craype-broadwell` compute nodes -optimisation have the same -architecture -(Broadwell) + Architecture Load this module first: `-xHost` `-march=native -mtune=native` Build and + specific `module load craype-broadwell` compute nodes + optimisation have the same + architecture + (Broadwell) -Vectorisation `-hlist=m` `-qopt-report` `-fopt-info-vec` or -reports `-fopt-info-missed` + Vectorisation `-hlist=m` `-qopt-report` `-fopt-info-vec` or   + reports `-fopt-info-missed` -OpenMP `-homp` (default) `-qopenmp` 
`-fopenmp` ------------------------------------------------------------------------------------------------------------------------- + OpenMP `-homp` (default) `-qopenmp` `-fopenmp` + ------------------------------------------------------------------------------------------------------------------------ Additional compiler options are documented in the compiler man pages, e.g. `man mpicc`, which are available *after* loading the related @@ -178,14 +178,14 @@ compiler module. Additional documentation can be also found at the vendor web pages: - [Cray Fortran -v8.7](https://pubs.cray.com/content/S-3901/8.7/cray-fortran-reference-manual/fortran-compiler-introduction), -[Cray C and C++ -v8.7](https://pubs.cray.com/content/S-2179/8.7/cray-c-and-c++-reference-manual/invoke-the-c-and-c++-compilers) + v8.7](https://pubs.cray.com/content/S-3901/8.7/cray-fortran-reference-manual/fortran-compiler-introduction), + [Cray C and C++ + v8.7](https://pubs.cray.com/content/S-2179/8.7/cray-c-and-c++-reference-manual/invoke-the-c-and-c++-compilers) - [Intel Parallel Studio XE Cluster -Edition](https://software.intel.com/en-us/node/685016) for Linux is -installed on the Mahuika HPC Cluster, Mahuika Ancillary Nodes + Edition](https://software.intel.com/en-us/node/685016) for Linux is + installed on the Mahuika HPC Cluster, Mahuika Ancillary Nodes - [Intel Developer -Guides](https://software.intel.com/en-us/documentation/view-all?search_api_views_fulltext=¤t_page=0&value=78151,83039;20813,80605,79893,20812,20902;20816;20802;20804) + Guides](https://software.intel.com/en-us/documentation/view-all?search_api_views_fulltext=¤t_page=0&value=78151,83039;20813,80605,79893,20812,20902;20816;20802;20804) - [GCC Manuals](https://gcc.gnu.org/onlinedocs/) **Note**: Cray uses compiler wrappers. To list the compiler options, @@ -207,13 +207,13 @@ compile the program. 
In general, to link against an external package, one must specify: - The location of the header files, using the option -`-I/path/to/headers` + `-I/path/to/headers` - The location of the compiled library or libraries, using -`-L/path/to/lib/` + `-L/path/to/lib/` - The name of each library, typically without prefixes and suffixes. -For example, if the full library file name is `libfoo.so.1.2.3` -(with aliases `libfoo.so.1` and `libfoo.so`), the expected entry on -the link line is `-lfoo`. + For example, if the full library file name is `libfoo.so.1.2.3` + (with aliases `libfoo.so.1` and `libfoo.so`), the expected entry on + the link line is `-lfoo`. Thus the linker expects to find the include headers in the */path/to/headers* and the library at */path/to/lib/lib.so* (we assume @@ -270,7 +270,7 @@ used library should be build with the same compiler. adding the location to the MPI library. This can be observed calling e.g. `mpif90 -showme` -### Common Linker Problems +### Common Linker Problems Linking can easily go wrong. Most often, you will see linker errors about "missing symbols" when the linker could not find a function used @@ -279,38 +279,38 @@ resolve this problem, have a closer look at the function names that the linker reported: - Are you missing some object code files (these are compiled source -files and have suffix `.o`) that should appear on the linker line? -This can happen if the build system was not configured correctly or -has a bug. Try running the linking step manually with all source -files and debug the build system (which can be a lengthy and -cumbersome process, unfortunately). + files and have suffix `.o`) that should appear on the linker line? + This can happen if the build system was not configured correctly or + has a bug. Try running the linking step manually with all source + files and debug the build system (which can be a lengthy and + cumbersome process, unfortunately). - Do the missing functions have names that contain "mp" or "omp"? 
This -could mean that some of your source files or external libraries were -built with OpenMP support, which requires you to set an OpenMP flag -(`-fopenmp` for GNU compilers, `-qopenmp` for Intel) in your linker -command. For the Cray compilers, OpenMP is enabled by default and -can be controlled using `-h[no]omp`. + could mean that some of your source files or external libraries were + built with OpenMP support, which requires you to set an OpenMP flag + (`-fopenmp` for GNU compilers, `-qopenmp` for Intel) in your linker + command. For the Cray compilers, OpenMP is enabled by default and + can be controlled using `-h[no]omp`. - Do you see a very long list of complex-looking function names, and -does your source code or external library dependency include C++ -code? You may need to explicitly link against the C++ standard -library (`-lstdc++` for GNU and Cray compilers, `-cxxlib` for Intel -compilers); this is a particularly common problem for statically -linked code. + does your source code or external library dependency include C++ + code? You may need to explicitly link against the C++ standard + library (`-lstdc++` for GNU and Cray compilers, `-cxxlib` for Intel + compilers); this is a particularly common problem for statically + linked code. - Do the function names end with an underscore ("\_")? You might be -missing some Fortran code, either from your own sources or from a -library that was written in Fortran, or parts of your Fortran code -were built with flags such as `-assume nounderscore` (Intel) or -`-fno-underscoring` (GNU), while others were using different flags -(note that the Cray compiler always uses underscores). + missing some Fortran code, either from your own sources or from a + library that was written in Fortran, or parts of your Fortran code + were built with flags such as `-assume nounderscore` (Intel) or + `-fno-underscoring` (GNU), while others were using different flags + (note that the Cray compiler always uses underscores). 
- Do the function names end with double underscores ("\_\_")? Fortran
-compilers offer an option to add double underscores to Fortran
-subroutine names for compatibility reasons
-(`-h [no]second_underscore`, `-assume [no]2underscores`,
-`-f[no-]second-underscore`) which you may have to add or remove.
+  compilers offer an option to add double underscores to Fortran
+  subroutine names for compatibility reasons
+  (`-h [no]second_underscore`, `-assume [no]2underscores`,
+  `-f[no-]second-underscore`) which you may have to add or remove.
- Compilers do not necessarily enable preprocessing, which could result in
-`#ifndef VAR; Warning: Illegal preprocessor directive`. For example,
-using preprocessor directives in `.f` files with gfortran requires
-the `-cpp` option.
+  `#ifndef VAR; Warning: Illegal preprocessor directive`. For example,
+  using preprocessor directives in `.f` files with gfortran requires
+  the `-cpp` option.

Note that the linker requires that function names match exactly, so any
variation in function name in your code will lead to a "missing symbols"
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md b/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
index 3562ddb19..1a2af907e 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Configuring_Dask_MPI_jobs.md
@@ -20,14 +20,14 @@ zendesk_section_id: 360000040056
[//]: <> (REMOVE ME IF PAGE VALIDATED)
!!! prerequisite Start simple
-The technique explained in this page should be considered **after**
-trying simpler single node options (e.g.  [Dask Distributed
-LocalCluster](https://docs.dask.org/en/stable/deploying-python.html)),
-if
-- you need more cores than what is available on a single node,
-- or your queuing time is too long.
-Note that using MPI to distribute computations on multiple nodes can
-have an impact on performances, compared to a single node setting.
+    The technique explained in this page should be considered **after**
+    trying simpler single node options (e.g.  [Dask Distributed
+    LocalCluster](https://docs.dask.org/en/stable/deploying-python.html)),
+    if
+    - you need more cores than what is available on a single node,
+    - or your queuing time is too long.
+    Note that using MPI to distribute computations on multiple nodes can
+    have an impact on performance, compared to a single node setting.

[Dask](https://dask.org/) is a popular Python package for parallelising
workflows. It can use a variety of parallelisation backends, including
@@ -78,19 +78,19 @@ request mpi4py with the Intel MPI distribution as follows:

``` sl
name: myenvironment
channels:
-- myfavouritechannel
-- intel
+  - myfavouritechannel
+  - intel
dependencies:
-- mypackage
-- anotherpackage
-- intel::mpi4py
-- dask-mpi
+  - mypackage
+  - anotherpackage
+  - intel::mpi4py
+  - dask-mpi
```

!!! prerequisite See also
-See the
-[Miniconda3](https://support.nesi.org.nz/hc/en-gb/articles/360001580415)
-page for more information on how to create and manage Miniconda
-environments on NeSI.
+    See the
+    [Miniconda3](https://support.nesi.org.nz/hc/en-gb/articles/360001580415)
+    page for more information on how to create and manage Miniconda
+    environments on NeSI.

## Configuring Slurm

@@ -102,7 +102,7 @@ then assigns different roles to the different ranks:

- Rank 0 becomes the scheduler that coordinates work and communication
- Rank 1 becomes the worker that executes the main Python program and
-hands out workloads
+  hands out workloads
- Ranks 2 and above become additional workers that run workloads

This implies that **Dask-MPI jobs must be launched on at least 3 MPI
@@ -116,7 +116,7 @@ a short test workload with and without hyperthreading.

In the following, two cases will be discussed:

1.
The worker ranks use little memory and they do not use -parallelisation themselves + parallelisation themselves 2. The worker ranks use a lot of memory and/or parallelisation Note that Slurm will place different MPI ranks on different nodes on the @@ -185,10 +185,10 @@ dm.initialize(local_directory=os.getcwd()) # Define two simple test functions def inc(x): -return x + 1 + return x + 1 def add(x, y): -return x + y + return x + y client = dd.Client() @@ -247,9 +247,9 @@ While it is impossible to cover every possible scenario, the following guidelines should help with configuring the container correctly. 1. Make sure that the Intel MPI version of the "mpi4py" package is -installed with Dask-MPI + installed with Dask-MPI 2. The correct version of Python and the Intel MPI distribution need to -be loaded at runtime. + be loaded at runtime. Here is an example of a minimal Singularity container definition file: @@ -258,22 +258,22 @@ Bootstrap: docker From: continuumio/miniconda3:latest %post -conda install -y -n base -c intel mpi4py -conda install -y -n base -c conda-forge dask-mpi + conda install -y -n base -c intel mpi4py + conda install -y -n base -c conda-forge dask-mpi %runscript -. $(conda info --base)/etc/profile.d/conda.sh -conda activate base -python "$@" + . $(conda info --base)/etc/profile.d/conda.sh + conda activate base + python "$@" ``` where the `%runscript` section ensures that the Python script passed to `singularity run` is executed using the Python interpreter of the base Conda environment inside the container. !!! prerequisite Tips -You can build this container on NeSI, using the Mahuika Extension -nodes, following the instructions from the [dedicated support -page](https://support.nesi.org.nz/hc/en-gb/articles/6008779241999). + You can build this container on NeSI, using the Mahuika Extension + nodes, following the instructions from the [dedicated support + page](https://support.nesi.org.nz/hc/en-gb/articles/6008779241999). 
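Combining the requirements above — at least three MPI ranks, with rank 0 acting as scheduler and rank 1 running the main program — a minimal submission script might look like the following sketch. The job name, module name, and script name are placeholders, not a tested NeSI configuration:

``` sl
#!/bin/bash -e
#SBATCH --job-name=dask-mpi-example   # placeholder job name
#SBATCH --ntasks=4                    # >= 3: scheduler + main program + worker(s)
#SBATCH --cpus-per-task=1
#SBATCH --mem-per-cpu=2G
#SBATCH --time=00:10:00

# Load a Python environment that provides dask-mpi and mpi4py
# (module name is illustrative).
module load Python

# Launch one MPI rank per Slurm task; dask_mpi.initialize() inside the
# script assigns the scheduler/worker roles as described above.
srun python my_dask_script.py
```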
### Slurm configuration

diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Finding_Software.md b/docs/Scientific_Computing/HPC_Software_Environment/Finding_Software.md
index 2db7c81bd..8096172aa 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Finding_Software.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Finding_Software.md
@@ -45,11 +45,11 @@ place of `module`. With Lmod you can:
- Use “spider” to search for modules, e.g. “Python” modules:
- Load a module:
- Prefix a module with “-“ to unload it, e.g. switch from Python 2 to
-Python 3:
+  Python 3:
- To get a fresh environment, we recommend that you log out and log in
-again. By logging out and logging in again you will revert to not
-only the default set of modules, but also the default set of
-environment variables.
+  again. By logging out and logging in again you will revert to not
+  only the default set of modules, but also the default set of
+  environment variables.

Further information can be found in the online [User Guide for
Lmod](https://lmod.readthedocs.io/en/latest/010_user.html).
@@ -71,7 +71,7 @@
NOTE: The substring search will soon be implemented by default, after
which you will no longer need to specify `-S`. Furthermore, this
improvement should also be ported to the Māui\_Ancil part.

- 
+  

NOTE: you can create your own modules. This is described
[here](https://support.nesi.org.nz/hc/en-gb/articles/360000474535-Installing-Third-Party-applications).
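The Lmod operations described above can also be combined in a single command; this sketch assumes a shell on an Lmod-based cluster, and the module versions are illustrative:

``` sl
# Search for available Python modules (all versions, case-insensitive).
module spider Python

# Load one module and unload another in one command: the "-" prefix
# unloads, e.g. switching from Python 2 to Python 3.
module load -Python/2.7.16 Python/3.8.2

# Show what is currently loaded.
module list
```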
\ No newline at end of file
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md b/docs/Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md
index 43a6b27e6..ebf33c4c3 100644
--- a/docs/Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md
+++ b/docs/Scientific_Computing/HPC_Software_Environment/Installing_Third_Party_applications.md
@@ -24,13 +24,13 @@ Nevertheless, if you need additional applications or libraries (below
called package), we distinguish:

- you need a **newer version** of an already installed package: [ask
-NeSI support](https://support.nesi.org.nz/hc/en-gb/requests/new) for
-an update
+  NeSI support](https://support.nesi.org.nz/hc/en-gb/requests/new) for
+  an update
- you need an **older version** of an installed package: please use
-the Easybuild installation procedure (below) to install it into your
-working space
+  the Easybuild installation procedure (below) to install it into your
+  working space
- you want to test a **new (not installed)** package: below we
-collected some hints, how you can install it in your user space.
+  collected some hints on how you can install it in your user space.

In any case, if you have issues, do not hesitate to [open a
ticket](https://support.nesi.org.nz/hc/en-gb/requests/new) and ask NeSI
@@ -51,22 +51,22 @@ Nevertheless, the following should give you an impression which steps
you usually need to consider:

- Change into a desired source code directory. We suggest using
-`/nesi/nobackup/` or `/nesi/project/`
+  `/nesi/nobackup/` or `/nesi/project/`
- download the source code. This could be done via a repository
-checkout (`git clone `) or
-via downloading a tarball (`wget `). Unpack the
-tarball using `tar xf `. Change into source
-directory.
- load compiler module and modules for additional libraries -(`module load gimkl FFTW`) + (`module load gimkl FFTW`) - run the configure with appropriate options -`./configure --prefix= --use-fftw=$EBROOTFFTW  `(options -can be listed using `./configure --help`) + `./configure --prefix= --use-fftw=$EBROOTFFTW  `(options + can be listed using `./configure --help`) - In other applications you need to adjust the provided `Makefile` to -reflect compiler, and library options (see below) + reflect compiler, and library options (see below) - compile code (`make``)` - install the binaries and libraries into the specified directory -(`make install`) + (`make install`) ## Create your own modules (Optional) @@ -105,7 +105,8 @@ The module then can be loaded by: These modules can easily be shared with collaborators. They just need to specify the last two steps. +  +  - - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/HPC_Software_Environment/NICE_DCV_Setup.md b/docs/Scientific_Computing/HPC_Software_Environment/NICE_DCV_Setup.md index 6b63f4b78..5451f0b2a 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/NICE_DCV_Setup.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/NICE_DCV_Setup.md @@ -52,43 +52,43 @@ possible. 1. Log in to the appropriate host. -### On Māui + ### On Māui -1. Connect to the lander node following the instructions -[here](https://support.nesi.org.nz/hc/en-gb/sections/360000034315-Accessing-the-HPCs). -For example: + 1. Connect to the lander node following the instructions + [here](https://support.nesi.org.nz/hc/en-gb/sections/360000034315-Accessing-the-HPCs). + For example: -``` sl -ssh lander -``` + ``` sl + ssh lander + ``` -2. Connect from the lander node to one of the NICE DCV server -nodes: + 2. Connect from the lander node to one of the NICE DCV server + nodes: -``` sl -ssh w-ndcv01 -``` + ``` sl + ssh w-ndcv01 + ``` -#### On Mahuika + ### On Mahuika -1. Connect to the Mahuika login node: + 1. 
Connect to the Mahuika login node: -``` sl -ssh mahuika -``` + ``` sl + ssh mahuika + ``` -2. Connect to the NICE DCV server node (not yet available): + 2. Connect to the NICE DCV server node (not yet available): -``` sl -ssh vgpuwbg005 -``` + ``` sl + ssh vgpuwbg005 + ``` 2. Create a new NICE DCV session, replacing `` with a -session name of your choice: + session name of your choice: -``` sl -dcv create-session -``` + ``` sl + dcv create-session + ``` ## Establishing an SSH tunnel @@ -98,76 +98,76 @@ must create an SSH tunnel through the NeSI lander node. ### Linux, Mac, or Windows Subsystem for Linux !!! prerequisite Warning -If successful, commands to open SSH tunnels will look like they are -doing nothing (hanging) but it is important to leave them running. -Once you kill a relevant SSH tunnel connection (e.g. `Ctrl-c`) you -will no longer be able to connect to your NICE DCV session. + If successful, commands to open SSH tunnels will look like they are + doing nothing (hanging) but it is important to leave them running. + Once you kill a relevant SSH tunnel connection (e.g. `Ctrl-c`) you + will no longer be able to connect to your NICE DCV session. 1. On your machine run the following command in your Linux terminal -emulator (assuming you added the -[recommended](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup) -sections to your `~/.ssh/config` file). This command opens an SSH -tunnel through the NeSI lander node to the SSH port on w-ndcv01. - -#### To connect to Māui - -``` sl -# The first port number (22222 in this example) can be anything you like > 1024, -# so long as it's not in use by another service. -# We have picked 22222 because it's easy to remember, the SSH port being 22. 
-ssh -L 22222:w-ndcv01.maui.niwa.co.nz:22 -o ExitOnForwardFailure=yes -N lander -``` - -If you don't already have another open connection to or through the -NeSI lander node, you will at this point be prompted for your -password and your second factor. Enter them in the usual manner. - -#### To connect to Mahuika - -1. Open an SSH tunnel through the lander node to the Mahuika login -node. - -``` sl -# The tunnel port numbers (10022 in this example) can be anything you like > 1024, -# so long as neither of them is in use by another service. -# We have picked 10022 because it's easy to remember, the SSH port being 22. -ssh -L 10022:login.mahuika.nesi.org.nz:22 -o ExitOnForwardFailure=yes -N lander -``` - -If you don't already have another open connection to or through -the NeSI lander node to the Mahuika login node, you will at this -point be prompted for your password and your second factor. -Enter them in the usual manner. - -2. In a new terminal, open an SSH tunnel through this existing -tunnel to Mahuika's NICE DCV node. - -``` sl -# The tunnel port numbers (22222 in this example) can be anything you like > 1024, -# so long as neither of them is in use by another service. -# We have picked 22222 because it's easy to remember, the SSH port being 22. -ssh -L 22222:vgpuwbg005:22 -o ExitOnForwardFailure=yes -N -p 10022 -l localhost -``` - -If prompted for a first factor, enter it in the usual manner. -The second factor is optional (you can just press Enter), but if -you provide a second factor it must be correct. + emulator (assuming you added the + [recommended](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup) + sections to your `~/.ssh/config` file). This command opens an SSH + tunnel through the NeSI lander node to the SSH port on w-ndcv01. + + ### To connect to Māui + + ``` sl + # The first port number (22222 in this example) can be anything you like > 1024, + # so long as it's not in use by another service. 
+ # We have picked 22222 because it's easy to remember, the SSH port being 22. + ssh -L 22222:w-ndcv01.maui.niwa.co.nz:22 -o ExitOnForwardFailure=yes -N lander + ``` + + If you don't already have another open connection to or through the + NeSI lander node, you will at this point be prompted for your + password and your second factor. Enter them in the usual manner. + + ### To connect to Mahuika + + 1. Open an SSH tunnel through the lander node to the Mahuika login + node. + + ``` sl + # The tunnel port numbers (10022 in this example) can be anything you like > 1024, + # so long as neither of them is in use by another service. + # We have picked 10022 because it's easy to remember, the SSH port being 22. + ssh -L 10022:login.mahuika.nesi.org.nz:22 -o ExitOnForwardFailure=yes -N lander + ``` + + If you don't already have another open connection to or through + the NeSI lander node to the Mahuika login node, you will at this + point be prompted for your password and your second factor. + Enter them in the usual manner. + + 2. In a new terminal, open an SSH tunnel through this existing + tunnel to Mahuika's NICE DCV node. + + ``` sl + # The tunnel port numbers (22222 in this example) can be anything you like > 1024, + # so long as neither of them is in use by another service. + # We have picked 22222 because it's easy to remember, the SSH port being 22. + ssh -L 22222:vgpuwbg005:22 -o ExitOnForwardFailure=yes -N -p 10022 -l localhost + ``` + + If prompted for a first factor, enter it in the usual manner. + The second factor is optional (you can just press Enter), but if + you provide a second factor it must be correct. 2. Open a second terminal session, and run the following command in it. -``` sl -# The first port number (28443 in this example) can be anything you like > 1024, -# so long as it's not in use by another service. -# We have picked 28443 because it's easy to remember, the NICE DCV port being 8443. 
-ssh -L 28443:localhost:8443 -o ExitOnForwardFailure=yes -N -p 22222 -l localhost -``` + ``` sl + # The first port number (28443 in this example) can be anything you like > 1024, + # so long as it's not in use by another service. + # We have picked 28443 because it's easy to remember, the NICE DCV port being 8443. + ssh -L 28443:localhost:8443 -o ExitOnForwardFailure=yes -N -p 22222 -l localhost + ``` -You will probably be prompted for a first factor and an optional -second factor. + You will probably be prompted for a first factor and an optional + second factor. -Like the above command, this command will apparently hang if -successful. Do not interrupt it as it is necessary to hold the port -open for the server. + Like the above command, this command will apparently hang if + successful. Do not interrupt it as it is necessary to hold the port + open for the server. ### MobaXTerm on Windows @@ -176,7 +176,7 @@ connections to look like this: #### To connect to Māui -![2020-02-11\_NICE\_DCV\_tunnels\_in\_MobaXTerm.png](../../assets/images/NICE_DCV_Setup.png) +![2020-02-11\_NICE\_DCV\_tunnels\_in\_MobaXTerm.png](../../assets/images/NICE_DCV_Setup.png) When setting up and using the connections, note the following: #### To connect to Mahuika @@ -184,16 +184,16 @@ When setting up and using the connections, note the following: A picture is still to come. - The numbers of the forward ports (fourth column) are arbitrary so -long as you set them to be greater than 1024, but the SSH server -port for the second connection must be the same as the forward port -for the first connection. + long as you set them to be greater than 1024, but the SSH server + port for the second connection must be the same as the forward port + for the first connection. - The destination port for the first tunnel must be `22` and that for -the second tunnel must be `8443`. + the second tunnel must be `8443`. - The server port for the first tunnel must be `22`. 
- The tunnel through the lander node must be started before the tunnel -through localhost can be started. + through localhost can be started. - The destination server for the tunnel through the lander node must -be the NeSI login node where your NICE DCV server is running. + be the NeSI login node where your NICE DCV server is running. ## Connecting to a session @@ -212,8 +212,8 @@ To connect with the NICE DCV client software: 1. Launch the client on your laptop or desktop computer. 2. Enter the server and session name in the login screen using the -format `localhost:28443#`, or whatever port number you -used for the second SSH tunnel as an alternative to 28443. + format `localhost:28443#`, or whatever port number you + used for the second SSH tunnel as an alternative to 28443. 3. Click on "Connect". 4. Enter your NeSI Linux username and password. 5. Click on "Login". @@ -224,10 +224,10 @@ To connect with a browser: 1. Launch the browser or open a new tab 2. Enter "https://localhost:28443/#<session name>" in the URL -bar. If you used a port other than 28443 when creating the second -SSH tunnel, make the necessary modifications to this URL. + bar. If you used a port other than 28443 when creating the second + SSH tunnel, make the necessary modifications to this URL. 3. You may need to accept the insecure certificate in your browser -before you can proceed + before you can proceed 4. Enter your HPC account credentials (first factor) 5. Click on "Login" @@ -250,14 +250,14 @@ often as you like. ### Disconnecting from a session without stopping it 1. Click on the machine URL in the top-right corner of the NICE DCV -window + window 2. Select "Disconnect" 3. Close the NICE DCV client or browser window ### Disconnecting and stopping a session 1. Click on the application launcher icon in the top-left corner of the -virtual desktop + virtual desktop 2. Click on "Leave" 3. Click on "Log out" 4. 
Confirm the logout in the dialog box that appears diff --git a/docs/Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md b/docs/Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md index c11ebb496..385174a5e 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/NVIDIA_GPU_Containers.md @@ -41,53 +41,53 @@ running the NAMD image on NeSI, based on the NVIDIA instructions here: . 1. Download the APOA1 benchmark data: -- ``` sl -wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash -cd apoa1 -``` + - ``` sl + wget -O - https://gitlab.com/NVHPC/ngc-examples/raw/master/namd/3.0/get_apoa1.sh | bash + cd apoa1 + ``` 2. Load the Singularity module: -- ``` sl -module load Singularity -``` + - ``` sl + module load Singularity + ``` 3. Build the Singularity image. This step differs from the NVIDIA -instructions because instead of using "build" we "pull" the image -directly, which does not require root access: -- Please do refer  "[Build Environment -Variables](https://support.nesi.org.nz/hc/en-gb/articles/360001107916-Singularity#build_environment_variables)" -prior to running the following `pull` command + instructions because instead of using "build" we "pull" the image + directly, which does not require root access: + - Please do refer  "[Build Environment + Variables](https://support.nesi.org.nz/hc/en-gb/articles/360001107916-Singularity#build_environment_variables)" + prior to running the following `pull` command -- ``` sl -singularity pull namd-3.0-alpha9-singlenode.sif docker://nvcr.io/hpc/namd:3.0-alpha9-singlenode -``` + - ``` sl + singularity pull namd-3.0-alpha9-singlenode.sif docker://nvcr.io/hpc/namd:3.0-alpha9-singlenode + ``` 4. 
Copy the following into a Slurm script named *run.sl*: -- ``` sl -#!/bin/bash -e + - ``` sl + #!/bin/bash -e -#SBATCH --job-name=namdgpu -#SBATCH --time=00:10:00 -#SBATCH --ntasks=1 -#SBATCH --cpus-per-task=8 -#SBATCH --gpus-per-node P100:1 -#SBATCH --mem=1G + #SBATCH --job-name=namdgpu + #SBATCH --time=00:10:00 + #SBATCH --ntasks=1 + #SBATCH --cpus-per-task=8 + #SBATCH --gpus-per-node P100:1 + #SBATCH --mem=1G -module purge -module load Singularity + module purge + module load Singularity -# name of the NAMD input file, tag, etc -NAMD_INPUT="apoa1_nve_cuda.namd" -NAMD_SIF="namd-3.0-alpha9-singlenode.sif" -NAMD_EXE=namd3 + # name of the NAMD input file, tag, etc + NAMD_INPUT="apoa1_nve_cuda.namd" + NAMD_SIF="namd-3.0-alpha9-singlenode.sif" + NAMD_EXE=namd3 -# singularity command with required arguments -SINGULARITY="singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd ${NAMD_SIF}" + # singularity command with required arguments + SINGULARITY="singularity exec --nv -B $(pwd):/host_pwd --pwd /host_pwd ${NAMD_SIF}" -# run NAMD -${SINGULARITY} ${NAMD_EXE} +ppn ${SLURM_CPUS_PER_TASK} +idlepoll ${NAMD_INPUT} -``` + # run NAMD + ${SINGULARITY} ${NAMD_EXE} +ppn ${SLURM_CPUS_PER_TASK} +idlepoll ${NAMD_INPUT} + ``` 5. Submit the job: -- ``` sl -sbatch run.sl -``` + - ``` sl + sbatch run.sl + ``` 6. View the standard output from the simulation in the Slurm .out file. -We expect similar steps to work for other NGC containers. \ No newline at end of file + We expect similar steps to work for other NGC containers. \ No newline at end of file diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md b/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md index e85d355a9..69a8440a0 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenACC.md @@ -31,7 +31,7 @@ want to run a GPU. 
We'll use OpenACC, which adds directives to your source code. The advantages of OpenACC over other approaches is that the source code changes are generally small and your code remains portable, i.e. it will run on both CPU and GPU. The main disadvantage of OpenACC -is that only a few compilers support it. +is that only a few compilers support it.  More information about OpenACC can be found [here](http://www.icl.utk.edu/~luszczek/teaching/courses/fall2016/cosc462/pdf/OpenACC_Fundamentals.pdf). @@ -46,13 +46,13 @@ written in Fortran): #include #include int main() { -double total = 0; -int i, n = 1000000000; + double total = 0; + int i, n = 1000000000; #pragma acc parallel loop copy(total) copyin(n) reduction(+:total) -for (i = 0; i < n; ++i) { -total += exp(sin(M_PI * (double) i/12345.6789)); -} -std::cout << "total is " << total << '\n'; + for (i = 0; i < n; ++i) { + total += exp(sin(M_PI * (double) i/12345.6789)); + } + std::cout << "total is " << total << '\n'; } ``` @@ -70,7 +70,7 @@ threads, the speedup can be significant. Also note that `total` is initialised on the CPU (above the pragma) and should be copied to the GPU and back to the CPU after completing the loop. (It is also possible to initialise this variable on the GPU.) Likewise the number of -iterations `n` should be copied from the CPU  to the GPU. +iterations `n` should be copied from the CPU  to the GPU.  ## Compile @@ -89,13 +89,13 @@ but first we need to load a few modules: ``` sl module load craype-broadwell -module load cray-libsci_acc -module load craype-accel-nvidia60 +module load cray-libsci_acc +module load craype-accel-nvidia60 module load PrgEnv-cray ``` (Ignore warning "cudatoolkit >= 8.0 is required"). 
Furthermore, you -may need to load `cuda/fft` or `cuda/blas` +may need to load `cuda/fft` or `cuda/blas` To compare the execution times between the CPU and GPU version, we build two executables: @@ -125,7 +125,7 @@ time srun --ntasks=1 --cpus-per-task=1 --gpus-per-node=P100:1 ./totalAccGpu | total | 7.6 | | totalAccGpu | 0.41 | - +  Check out [this page](https://support.nesi.org.nz/hc/en-gb/articles/360001127856-Offloading-to-GPU-with-OpenMP-) diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md b/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md index 997dfce11..6e1832d9b 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Offloading_to_GPU_with_OpenMP.md @@ -32,14 +32,14 @@ operation involving a large loop: #include #include int main() { -int n = 1000000000; -double total = 0; + int n = 1000000000; + double total = 0; #pragma omp target teams distribute \ parallel for map(tofrom: total) map(to: n) reduction(+:total) -for (int i = 0; i < n; ++i) { -total += exp(sin(M_PI * (double) i/12345.6789)); -} -std::cout << "total is " << total << '\n'; + for (int i = 0; i < n; ++i) { + total += exp(sin(M_PI * (double) i/12345.6789)); + } + std::cout << "total is " << total << '\n'; } ``` @@ -53,7 +53,7 @@ map(to: n) reduction(+:total) ``` which moves variables `total` and `n` to the GPU and creates teams of -threads to perform the sum operation in parallel. +threads to perform the sum operation in parallel.  ## Compile @@ -62,7 +62,7 @@ need to load a few modules: ``` sl module load cray-libsci_acc/18.06.1 craype-accel-nvidia60 \ -PrgEnv-cray/1.0.4 cuda92/blas/9.2.88 cuda92/toolkit/9.2.88 + PrgEnv-cray/1.0.4 cuda92/blas/9.2.88 cuda92/toolkit/9.2.88 ``` (Ignore warning "cudatoolkit >= 8.0 is required"). 
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md b/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md index 9cad902dd..f6ca8db19 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/OpenMP_settings.md @@ -76,7 +76,7 @@ generally advisable to pin the threads to avoid delays caused by thread migration. 3\. OMP\_PLACES. Set this to "cores" if you want to pin the threads to -physical cores, or to "threads" if you want to use hyperthreading. +physical cores, or to "threads" if you want to use hyperthreading.  The effect of each setting is illustrated below. In this experiment we measured the execution time twice of the finite difference @@ -137,7 +137,7 @@ unset In the default case, --hint was not used and the environment variables OMP\_PROC\_BIND and OMP\_PLACES were not set. Significant variations of execution times are sometimes observed due to the random placement of -threads, which may or may not share a physical core. +threads, which may or may not share a physical core.  The third column shows the settings for the case with no multithreading. The fourth column places 2 threads per physical cores (i.e. 
diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Per_job_temporary_directories.md b/docs/Scientific_Computing/HPC_Software_Environment/Per_job_temporary_directories.md index 51e488716..8c62ce407 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Per_job_temporary_directories.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Per_job_temporary_directories.md @@ -51,7 +51,7 @@ yourself is shown below, `export TMPDIR=/nesi/nobackup/$SLURM_ACCOUNT/tmp/$SLURM_JOB_ID` - +  ## Example of copying data into the per job temporary directories for use mid-job diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Programming_environment_differences_between_Maui_and_Mahuika.md b/docs/Scientific_Computing/HPC_Software_Environment/Programming_environment_differences_between_Maui_and_Mahuika.md index e28db4c61..4fc56aecc 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Programming_environment_differences_between_Maui_and_Mahuika.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Programming_environment_differences_between_Maui_and_Mahuika.md @@ -33,7 +33,7 @@ Mahuika Ancillary Nodes, and Māui Ancillary nodes) systems. Table 1: The Cray Programming Environment on Māui and Mahuika. Black text indicates components common to both systems, green to components only available on Mahuika, and blue to components only available on Māui -XC part. +XC part.
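The practical difference between the two environments shows up in how you pick a compiler suite. A sketch of both styles (commands assume the respective cluster; the snippet is guarded so it is harmless on a machine without the `module` command):

``` shell
# Compiler selection on the two systems, side by side.
if command -v module > /dev/null 2>&1; then
  # Māui XC50: swap the whole programming environment in one step;
  # Cray-provided library modules follow the new suite automatically.
  module switch PrgEnv-cray PrgEnv-gnu
  # Mahuika CS: load an EasyBuild toolchain (GCC + Intel MPI + Intel MKL) instead.
  module load gimkl
else
  echo "module command not available on this host"
fi
```

On a workstation without an environment-modules installation the snippet only prints the fallback message.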
@@ -128,21 +128,21 @@ Collector

**Notes:**

1. 1Only available on Mahuika HPC Cluster, Mahuika Ancillary
-Nodes and Māui Ancillary nodes
+    Nodes and Māui Ancillary nodes
2. 2Only available on Māui Supercomputer.
3. On Māui (XC50) the Modules framework is used to simplify access to
-the various compiler suites and libraries. To access a particular
-compiler suite, you simply load (or switch to) the appropriate
-programming environment module using the command PrgEnv-X (where X
-is one of gnu, intel, or cray). This facility is not available on
-the Mahuika HPC Cluster, Mahuika Ancillary Nodes and Māui Ancillary
-nodes.
+    the various compiler suites and libraries. To access a particular
+    compiler suite, you simply load (or switch to) the appropriate
+    programming environment module using the command PrgEnv-X (where X
+    is one of gnu, intel, or cray). This facility is not available on
+    the Mahuika HPC Cluster, Mahuika Ancillary Nodes and Māui Ancillary
+    nodes.
4. [Intel Parallel Studio XE Cluster
-Edition](https://software.intel.com/en-us/node/685016) for Linux
-will be installed on the Mahuika HPC Cluster, Mahuika Ancillary
-Nodes and Māui Ancillary nodes.
+    Edition](https://software.intel.com/en-us/node/685016) for Linux
+    will be installed on the Mahuika HPC Cluster, Mahuika Ancillary
+    Nodes and Māui Ancillary nodes.
5. Intel Parallel Studio XE Professional Edition for CLE will be
-installed installed on Māui.
+    installed on Māui.

## Key Similarities between CPE on XC50 and CS400/500s

@@ -161,25 +161,25 @@ least similar), but also some important differences that affect how a
user interacts with the system when building an application code:

- The XC platform uses compiler drivers (“ftn”, “cc”, “CC”), users
-should not use compilers directly. The CS platforms have compiler
-drivers only for Cray compiler. For GNU and Intel compilers, users
-run “gfortran”, “ifort”, “gcc”, “icc” etc.;
+    should not use compilers directly. The CS platforms have compiler
+    drivers only for the Cray compiler. 
For GNU and Intel compilers, users
+    run “gfortran”, “ifort”, “gcc”, “icc” etc.;
- On the XC platform, a compiler is chosen by switching to its
-corresponding “PrgEnv-xxx” module. This will also switch
-automatically the version of the loaded Cray provided libraries,
-e.g., the cray-netcdf and cray-fftw library modules – no equivalent
-is available on the CS platforms; On the CS platforms the main
-software stack is based on Easybuild toolchains. The default one is
-“gimkl”, including GCC, Intel MPI, and Intel MKL.
+    corresponding “PrgEnv-xxx” module. This also automatically switches
+    the version of the loaded Cray-provided libraries,
+    e.g., the cray-netcdf and cray-fftw library modules – no equivalent
+    is available on the CS platforms. On the CS platforms the main
+    software stack is based on EasyBuild toolchains. The default one is
+    “gimkl”, including GCC, Intel MPI, and Intel MKL.
- The XC platform requires everyone to use Cray-MPI, but on the CS
-platform, users can choose to use various MPI libraries;
+    platform, users can choose to use various MPI libraries;
- Getting rid of all modules via “module purge” renders an XC session
-unusable (a list of ~20 modules are necessary to guarantee
-operation). On CS there are only few modules necessary, the main one
-is called “NeSI”, providing the NeSI software stack and slurm
-module;
+    unusable (a list of ~20 modules is necessary to guarantee
+    operation). On CS only a few modules are necessary; the main one
+    is called “NeSI”, providing the NeSI software stack and the Slurm
+    module;
- The XC platform defaults to static linking, the CS platform to
-dynamic linking;
+    dynamic linking;

In summary, compilers, as well as various tools and libraries are
common across both platforms. 
However, there are important differences in how @@ -190,11 +190,12 @@ nodes)](https://nesi.github.io/hpc_training/lessons/maui-and-mahuika/building-co and [ Māui XC50](https://nesi.github.io/hpc_training/lessons/maui-and-mahuika/building-code-maui). +  +  +  +  - - - - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md b/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md index 383803c74..82dbe491a 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Thread_Placement_and_Thread_Affinity.md @@ -68,20 +68,20 @@ before). It is very important to note the following: - Each socket only has access to its own RAM - it will need to ask the -processor in the other socket if it wants to access that RAM space, -and that takes longer (this is called a -[NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access) -architecture) + processor in the other socket if it wants to access that RAM space, + and that takes longer (this is called a + [NUMA](https://en.wikipedia.org/wiki/Non-uniform_memory_access) + architecture) - Each socket has a fast cache that is shared between all cores in -that socket + that socket - Each core has its own private fast cache as well For a thread that runs on a given core, this means: - Data is "local" when it is stored in RAM or cache close to that core -and can be accessed very quickly + and can be accessed very quickly - Data is "remote" when it is stored elsewhere and takes extra time to -access + access ## Thread Placement and Affinity @@ -125,9 +125,9 @@ called "hello\_world.c": #include int main() { -#pragma omp parallel -printf("Hello World from Thread %i!\n", omp_get_thread_num()); -return 0; + #pragma omp parallel + printf("Hello World from Thread %i!\n", omp_get_thread_num()); + return 0; } ``` @@ -191,12 
+191,12 @@ Hello World from Thread 2! The runtime library tells us that: - Slurm provided 3 physical cores with only 1 logical core ("thread") -per physical core - no hyperthreading + per physical core - no hyperthreading - We got the cores with IDs 0, 6, 8 in this particular example - these -happen to be on the same socket, but that is not guaranteed! + happen to be on the same socket, but that is not guaranteed! - All our threads are "bound" to all 3 cores at once - this means that -no affinity setup has been made, and the threads are free to move -from one core to another + no affinity setup has been made, and the threads are free to move + from one core to another Setting "--hint=multithread" instead to activate hyperthreading should result in output similar to this: @@ -217,12 +217,12 @@ Hello World from Thread 2! ``` - Slurm provided 2 physical cores with 2 logical cores ("threads") -each and 3 logical cores in total (we don't get the remaining -logical core on the second physical core, even though that logical -core will not be given to other jobs) + each and 3 logical cores in total (we don't get the remaining + logical core on the second physical core, even though that logical + core will not be given to other jobs) - Notice that we now get logical core IDs 6, 8, 46 - IDs 6 and 46 are -the first and second logical core inside the first physical core, -while ID 8 is a logical core in the second physical core + the first and second logical core inside the first physical core, + while ID 8 is a logical core in the second physical core ## Setting up thread placement and affinity @@ -238,15 +238,15 @@ optimising threading setup. 
Let us start with the following setup: - Run with "--hint=multithread" so that our program can access all -available logical cores + available logical cores - Bind threads to physical cores ("granularity=core") - they are still -free to move between the two logical cores inside a given physical -core + free to move between the two logical cores inside a given physical + core - Place threads close together ("compact") - although this has little -significance here as we use all available cores anyway, we still -need to specify this to activate thread affinity + significance here as we use all available cores anyway, we still + need to specify this to activate thread affinity - Bind thread IDs to logical core IDs in simple numerical order by -setting permute and offset specifiers to 0 + setting permute and offset specifiers to 0 ``` sl #!/bin/bash -e @@ -347,10 +347,10 @@ placement and affinity, it depends on the application. Also keep in mind that: - Job runtimes can be affected by other jobs that are running on the -same node and share network access, memory bus, and some caches on -the same socket + same node and share network access, memory bus, and some caches on + the same socket - The operating system on a node will also still need to run its own -processes and threads + processes and threads This can lead to a trade-off between restricting thread movement for better performance while allowing some flexibility for threads that are diff --git a/docs/Scientific_Computing/HPC_Software_Environment/Visualisation_software.md b/docs/Scientific_Computing/HPC_Software_Environment/Visualisation_software.md index 9699854ff..722ba21fd 100644 --- a/docs/Scientific_Computing/HPC_Software_Environment/Visualisation_software.md +++ b/docs/Scientific_Computing/HPC_Software_Environment/Visualisation_software.md @@ -129,7 +129,7 @@ mostly used in the weather and climate fields. 
| NCL/6.4.0-GCC-7.1.0 |   |  ✔ | NCL base package | | NCL/6.6.2-intel-2018b |  ✔ |   | NCL base package | - +  ### MATLAB @@ -257,20 +257,20 @@ If you want to use ParaView in client-server mode, use the following setup: - Load one of the ParaView Server modules listed above and launch the -server in your interactive visualisation session on the HPC: + server in your interactive visualisation session on the HPC: ``` sl mpiexec -np pvserver ``` - Create an SSH tunnel for port "11111" from the HPC to your local -machine using, e.g., the ssh program (Linux and MacOS) or MobaXterm -(Windows) + machine using, e.g., the ssh program (Linux and MacOS) or MobaXterm + (Windows) - Launch the ParaView GUI on your local machine and go to "File > -Connect" + Connect" - Click on "Add Server", choose server type "Client / Server", host -"localhost" (as we will be using the SSH tunnel), and port "11111", -then click on "Configure" + "localhost" (as we will be using the SSH tunnel), and port "11111", + then click on "Configure" - Select the new server and click on "Connect" ### VisIt @@ -303,3 +303,4 @@ language. | VTK/7.1.1-gimkl-2018b-Python-2.7.16 | ✔ |   | VTK7 with Python bindings | | VTK/8.1.1-GCC-7.1.0-Anaconda2-5.2.0 |   | ✔ | VTK8 with Python bindings | +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Manual_management.md b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Manual_management.md index e1061cbab..31a54682d 100644 --- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Manual_management.md +++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Manual_management.md @@ -39,20 +39,20 @@ pages. ## Adding a custom Python kernel !!! 
prerequisite See also -See the [Jupyter kernels - Tool-assisted -management](https://support.nesi.org.nz/hc/en-gb/articles/4414958674831) -page for the **preferred** way to register kernels, which uses the -`nesi-add-kernel` command line tool to automate most of these manual -steps. + See the [Jupyter kernels - Tool-assisted + management](https://support.nesi.org.nz/hc/en-gb/articles/4414958674831) + page for the **preferred** way to register kernels, which uses the + `nesi-add-kernel` command line tool to automate most of these manual + steps. You can configure custom Python kernels for running your Jupyter notebooks. This could be necessary and/or recommended in some situations, including: - if you wish to load a different combination of environment modules -than those we load in our default kernels + than those we load in our default kernels - if you would like to activate a virtual environment or conda -environment before launching the kernel + environment before launching the kernel The following example will create a custom kernel based on the Miniconda3 environment module (but applies to other environment modules @@ -106,7 +106,7 @@ module purge module load Miniconda3/4.8.2 # activate conda environment -source $(conda info --base)/etc/profile.d/conda.sh +source $(conda info --base)/etc/profile.d/conda.sh conda deactivate # workaround for https://github.com/conda/conda/issues/9392 conda activate my-conda-env @@ -126,15 +126,15 @@ like this (change <username> to your NeSI username): ``` sl { -"argv": [ -"/home//.local/share/jupyter/kernels/my-conda-env/wrapper.sh", -"-m", -"ipykernel_launcher", -"-f", -"{connection_file}" -], -"display_name": "My Conda Env", -"language": "python" + "argv": [ + "/home//.local/share/jupyter/kernels/my-conda-env/wrapper.sh", + "-m", + "ipykernel_launcher", + "-f", + "{connection_file}" + ], + "display_name": "My Conda Env", + "language": "python" } ``` @@ -205,15 +205,15 @@ look like this (change <project\_code> to your NeSI 
project code): ``` sl { -"argv": [ -"/nesi/project//.jupyter/share/jupyter/kernels/shared-ete-env/wrapper.sh", -"-m", -"ipykernel_launcher", -"-f", -"{connection_file}" -], -"display_name": "Shared Conda Env", -"language": "python" + "argv": [ + "/nesi/project//.jupyter/share/jupyter/kernels/shared-ete-env/wrapper.sh", + "-m", + "ipykernel_launcher", + "-f", + "{connection_file}" + ], + "display_name": "Shared Conda Env", + "language": "python" } ``` @@ -290,16 +290,16 @@ something like this (change <username> to your NeSI username): ``` sl { -"argv": [ -"/home//.local/share/jupyter/kernels/myrwithmpfr/wrapper.sh", -"--slave", -"-e", -"IRkernel::main()", -"--args", -"{connection_file}" -], -"display_name": "R with MPFR", -"language": "R" + "argv": [ + "/home//.local/share/jupyter/kernels/myrwithmpfr/wrapper.sh", + "--slave", + "-e", + "IRkernel::main()", + "--args", + "{connection_file}" + ], + "display_name": "R with MPFR", + "language": "R" } ``` diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Tool_assisted_management.md b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Tool_assisted_management.md index 453b5e3db..9eec753b4 100644 --- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Tool_assisted_management.md +++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_kernels_Tool_assisted_management.md @@ -61,7 +61,7 @@ nesi-add-kernel tf_kernel TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 and to share the kernel with other members of your NeSI project: ``` sl -nesi-add-kernel --shared tf_kernel_shared TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 +nesi-add-kernel --shared tf_kernel_shared TensorFlow/2.8.2-gimkl-2022a-Python-3.10.5 ``` To list all the installed kernels, use the following command: diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md 
b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md index 8fc1eaed0..0aa3911b7 100644 --- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md +++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Jupyter_on_NeSI.md @@ -25,10 +25,10 @@ zendesk_section_id: 360001189255 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Note -This service is available for users with a current allocation on -Mahuika only. -[Please contact us to request a suitable -allocation.](https://support.nesi.org.nz/hc/en-gb/requests/new) + This service is available for users with a current allocation on + Mahuika only. + [Please contact us to request a suitable + allocation.](https://support.nesi.org.nz/hc/en-gb/requests/new) ## Introduction @@ -42,20 +42,20 @@ learning, numerical simulation, managing [Slurm job submissions](https://support.nesi.org.nz/hc/en-gb/articles/360000684396) and workflows and much more. !!! prerequisite See also -- See the [RStudio via Jupyter on -NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) -page for launching an RStudio instance. -- See the [MATLAB via Jupyter on -NeSI](https://support.nesi.org.nz/hc/en-gb/articles/4614893064591) -page for launching MATLAB via Jupyter -- See the [Virtual Desktop via Jupyter on -NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001600235) -page for launching a virtual desktop via Jupyter. -- See the [Jupyter kernels - Tool-assisted -management](https://support.nesi.org.nz/hc/en-gb/articles/4414958674831) -(recommended) and [Jupyter kernels - Manual -management](https://support.nesi.org.nz/hc/en-gb/articles/4414951820559) -pages for adding kernels. + - See the [RStudio via Jupyter on + NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) + page for launching an RStudio instance. 
+ - See the [MATLAB via Jupyter on + NeSI](https://support.nesi.org.nz/hc/en-gb/articles/4614893064591) + page for launching MATLAB via Jupyter + - See the [Virtual Desktop via Jupyter on + NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001600235) + page for launching a virtual desktop via Jupyter. + - See the [Jupyter kernels - Tool-assisted + management](https://support.nesi.org.nz/hc/en-gb/articles/4414958674831) + (recommended) and [Jupyter kernels - Manual + management](https://support.nesi.org.nz/hc/en-gb/articles/4414951820559) + pages for adding kernels. ## Accessing Jupyter on NeSI @@ -76,25 +76,25 @@ be up and running within one to two minutes. Requesting a GPU can increase this time significantly as there are only a small number of GPUs available at NeSI. !!! prerequisite Tip -If your server appears to not have started within 3 minutes please -reload the browser window and check again, otherwise contact -[support@nesi.org.nz](mailto:support@nesi.org.nz?subject=Jupyter%20on%20NeSI). + If your server appears to not have started within 3 minutes please + reload the browser window and check again, otherwise contact + [support@nesi.org.nz](mailto:support@nesi.org.nz?subject=Jupyter%20on%20NeSI). ## Known issues - When using *srun* in a Jupyter terminal you may see messages like -those shown below. The "error" messages are actually just warnings -and can be ignored; the *srun* command should still work. -Alternatively, you could run *unset TMPDIR* in the terminal before -running *srun* to avoid these warnings. - -``` sl -$ srun --pty bash -srun: job 28560743 queued and waiting for resources -srun: job 28560743 has been allocated resources -slurmstepd: error: Unable to create TMPDIR [/dev/shm/jobs/28560712]: Permission denied -slurmstepd: error: Setting TMPDIR to /tmp -``` + those shown below. The "error" messages are actually just warnings + and can be ignored; the *srun* command should still work. 
+ Alternatively, you could run *unset TMPDIR* in the terminal before + running *srun* to avoid these warnings. + + ``` sl + $ srun --pty bash + srun: job 28560743 queued and waiting for resources + srun: job 28560743 has been allocated resources + slurmstepd: error: Unable to create TMPDIR [/dev/shm/jobs/28560712]: Permission denied + slurmstepd: error: Setting TMPDIR to /tmp + ``` ## Jupyter user interface @@ -125,9 +125,9 @@ gaining command line access to NeSI systems instead of using an SSH client. Some things to note are: - when you launch the terminal application some environment modules -are already loaded, so you may want to run `module purge` + are already loaded, so you may want to run `module purge`  - processes launched directly in the JupyterLab terminal will probably -be killed when you Jupyter session times out + be killed when your Jupyter session times out ## Ending your interactive session and logging out @@ -161,7 +161,7 @@ about JupyterLab extensions can be found Check the extension's documentation to find out the supported installation method for that particular extension. -### Installing prebuilt extensions +### Installing prebuilt extensions  If the extension is packaged as a prebuilt extension (e.g. as a pip package), then you can install it from the JupyterLab terminal by @@ -204,13 +204,13 @@ These changes will only take effect after relaunching your Jupyter server and then you should be able to install JupyterLab extensions as you please. !!! prerequisite Note -The above commands will put the JupyterLab application directory in -your home directory. The application directory often requires at least -1-2GB of disk space and 30,000 inodes (file count), so make sure you -have space available in your home directory first (see [NeSI File -Systems and -Quotas](https://support.nesi.org.nz/hc/en-gb/articles/360000177256-NeSI-File-Systems-and-Quotas)) -or request a larger quota.
+ The above commands will put the JupyterLab application directory in + your home directory. The application directory often requires at least + 1-2GB of disk space and 30,000 inodes (file count), so make sure you + have space available in your home directory first (see [NeSI File + Systems and + Quotas](https://support.nesi.org.nz/hc/en-gb/articles/360000177256-NeSI-File-Systems-and-Quotas)) + or request a larger quota. You could change the path to point to a location in your project directory, especially if multiple people on your project will share the diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/MATLAB_via_Jupyter_on_NeSI.md b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/MATLAB_via_Jupyter_on_NeSI.md index d44948631..9b9b1f891 100644 --- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/MATLAB_via_Jupyter_on_NeSI.md +++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/MATLAB_via_Jupyter_on_NeSI.md @@ -20,12 +20,12 @@ zendesk_section_id: 360001189255 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Note -This functionality is experimental and developing, which may introduce -breaking changes in the future. -If you would like to report a bug or propose a change see the GitHub -repo -[https://github.com/nesi/jupyter-matlab-proxy](https://github.com/nesi/jupyter-matlab-proxy?organization=nesi&organization=nesi) -or contact NeSI support at . + This functionality is experimental and developing, which may introduce + breaking changes in the future. + If you would like to report a bug or propose a change see the GitHub + repo + [https://github.com/nesi/jupyter-matlab-proxy](https://github.com/nesi/jupyter-matlab-proxy?organization=nesi&organization=nesi) + or contact NeSI support at . ## Getting started @@ -43,13 +43,13 @@ where you will see the following status information page. 
## ![image\_\_1\_.png](../../assets/images/MATLAB_via_Jupyter_on_NeSI_0.png) MATLAB may take a few minutes to load, once it does you will be put -straight into the MATLAB environment. +straight into the MATLAB environment.  You can open the status page at any time by clicking the [![](../../assets/images/MATLAB_via_Jupyter_on_NeSI_1.png)](https://github.com/mathworks/jupyter-matlab-proxy/raw/main/img/tools_icon.png) button. !!! prerequisite Note -Your license must be valid for MATLAB 2021b or newer. + Your license must be valid for MATLAB 2021b or newer. ## Licensing @@ -77,5 +77,6 @@ not work as intended. For more details see [MATLAB#known\_bugs](https://support.nesi.org.nz/hc/en-gb/articles/212639047#known_bugs). +  - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/RStudio_via_Jupyter_on_NeSI.md b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/RStudio_via_Jupyter_on_NeSI.md index 4c194b9bb..a695db2f6 100644 --- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/RStudio_via_Jupyter_on_NeSI.md +++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/RStudio_via_Jupyter_on_NeSI.md @@ -20,12 +20,12 @@ zendesk_section_id: 360001189255 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Note -This functionality is experimental and may introduce breaking changes -in the future. These notes should be read in conjunction with NeSI's -main [R support -page](https://support.nesi.org.nz/hc/en-gb/articles/209338087-R) -Your feedback is welcome, please don't hesitate to contact us at - to make suggestions. + This functionality is experimental and may introduce breaking changes + in the future. These notes should be read in conjunction with NeSI's + main [R support + page](https://support.nesi.org.nz/hc/en-gb/articles/209338087-R) + Your feedback is welcome, please don't hesitate to contact us at + to make suggestions. 
## Getting started @@ -68,8 +68,8 @@ correct Library Paths are available. For R/4.2.1 the command `.libPaths()` will return the following: ``` sl -.libPaths() -[1] "/home/YOUR_USER_NAME/R/gimkl-2022a/4.2" + .libPaths() +[1] "/home/YOUR_USER_NAME/R/gimkl-2022a/4.2" [2] "/opt/nesi/CS400_centos7_bdw/R/4.2.1-gimkl-2022a/lib64/R/library" ``` @@ -91,7 +91,7 @@ name, and is emptied with each new session. So will not fill up your home directory. ``` sl -tempdir() + tempdir() [1] "/nesi/nobackup//rstudio_tmp/Rtmpjp2rm8" ``` @@ -154,3 +154,4 @@ print the password: $ cat ~/.config/rstudio_on_nesi/server_password ``` +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md index 2289b22cc..62220a084 100644 --- a/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md +++ b/docs/Scientific_Computing/Interactive_computing_using_Jupyter/Virtual_Desktop_via_Jupyter_on_NeSI.md @@ -40,7 +40,7 @@ as long as your Jupyter session. ## Customisation -Most of the customisation of the desktop can be done from within, +Most of the customisation of the desktop can be done from within, panels, desktop, software preferences. ### `pre.bash` @@ -64,7 +64,7 @@ module load ANSYS/2021R2 # Any modules you want to be loaded in main instance go Environment set in `runscript_wrapper.bash` can be changed by creating a file `$XDG_CONFIG_HOME/vdt/post.bash` -Things you may wish to set here are: +Things you may wish to set here are: `VDT_WEBSOCKOPTS`, `VDT_VNCOPTS`, any changes to the wm environment, any changes to path, this include module files. 
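The `pre.bash`/`post.bash` hunks above describe the mechanism but give no `post.bash` body. A minimal sketch of a `$XDG_CONFIG_HOME/vdt/post.bash` is shown below; `VDT_VNCOPTS` is named in the docs, but the geometry/depth values are assumed TigerVNC-style options, not taken from the NeSI pages.

``` sl
# hypothetical $XDG_CONFIG_HOME/vdt/post.bash, sourced after runscript_wrapper.bash
# VDT_VNCOPTS comes from the docs above; its value here is an assumption
export VDT_VNCOPTS="-geometry 1920x1080 -depth 24"
# example of a PATH change, as the docs suggest you may want here
export PATH="$HOME/bin:$PATH"
echo "VDT_VNCOPTS=$VDT_VNCOPTS"
```

Because the file is sourced, plain `export` lines are enough; no shebang or execute bit is needed.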
@@ -84,67 +84,67 @@ You can build your own container bootstrapping off - +  *You can help contribute to this project [here](https://github.com/nesi/nesi-virtual-desktops/projects/1).* \ No newline at end of file diff --git a/docs/Scientific_Computing/Manuals_and_User_Guides/Manuals.md b/docs/Scientific_Computing/Manuals_and_User_Guides/Manuals.md index 649f65cc1..ff13ea75c 100644 --- a/docs/Scientific_Computing/Manuals_and_User_Guides/Manuals.md +++ b/docs/Scientific_Computing/Manuals_and_User_Guides/Manuals.md @@ -23,30 +23,31 @@ The following links will provide access to reference manuals and other guides. - Cray maintains a comprehensive technical documentation library -accessible [here](https://pubs.cray.com/), providing access to -Language Reference manuals, User guides, Performance analysis tools -and Cray Applications. + accessible [here](https://pubs.cray.com/), providing access to + Language Reference manuals, User guides, Performance analysis tools + and Cray Applications. - [Cray Fortran -v8.7](https://pubs.cray.com/content/S-3901/8.7/cray-fortran-reference-manual/fortran-compiler-introduction), [Cray -C and C++ -v8.7](https://pubs.cray.com/content/S-2179/8.7/cray-c-and-c++-reference-manual/invoke-the-c-and-c++-compilers) + v8.7](https://pubs.cray.com/content/S-3901/8.7/cray-fortran-reference-manual/fortran-compiler-introduction), [Cray + C and C++ + v8.7](https://pubs.cray.com/content/S-2179/8.7/cray-c-and-c++-reference-manual/invoke-the-c-and-c++-compilers) - Intel -[C/C++](https://software.intel.com/en-us/c-compilers/ipsxe-support/documentation) -and -[Fortran](https://software.intel.com/en-us/fortran-compilers-support/documentation), [Intel -Parallel Studio XE Cluster -Edition](https://software.intel.com/en-us/node/685016), [Intel -Developer -Guides](https://software.intel.com/en-us/documentation/view-all?search_api_views_fulltext=¤t_page=0&value=78151,83039;20813,80605,79893,20812,20902;20816;20802;20804) + 
[C/C++](https://software.intel.com/en-us/c-compilers/ipsxe-support/documentation) + and + [Fortran](https://software.intel.com/en-us/fortran-compilers-support/documentation), [Intel + Parallel Studio XE Cluster + Edition](https://software.intel.com/en-us/node/685016), [Intel + Developer + Guides](https://software.intel.com/en-us/documentation/view-all?search_api_views_fulltext=¤t_page=0&value=78151,83039;20813,80605,79893,20812,20902;20816;20802;20804) - [Allinea -Forge](http://content.allinea.com/downloads/userguide-forge.pdf) -(includes DDT and MAP, now called Arm Forge) + Forge](http://content.allinea.com/downloads/userguide-forge.pdf) + (includes DDT and MAP, now called Arm Forge) - [Nvidia Documentation](https://docs.nvidia.com/cuda/) - [cuda-gdb](https://docs.nvidia.com/cuda/cuda-gdb/) debugger - [cuda-memcheck](https://docs.nvidia.com/cuda/cuda-memcheck/) memory -checker + checker  - [GCC Manuals](https://gcc.gnu.org/onlinedocs/) See also [NeSI Application Support](https://support.nesi.org.nz/hc/en-gb/articles/360000170355) +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Manuals_and_User_Guides/Troubleshooting_on_NeSI.md b/docs/Scientific_Computing/Manuals_and_User_Guides/Troubleshooting_on_NeSI.md index 90eba2974..b6c0e121a 100644 --- a/docs/Scientific_Computing/Manuals_and_User_Guides/Troubleshooting_on_NeSI.md +++ b/docs/Scientific_Computing/Manuals_and_User_Guides/Troubleshooting_on_NeSI.md @@ -21,13 +21,14 @@ zendesk_section_id: 360000040036 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) - +  - + 

+  \ No newline at end of file diff --git a/docs/Scientific_Computing/Manuals_and_User_Guides/XC50_Aries_Network_Architecture.md b/docs/Scientific_Computing/Manuals_and_User_Guides/XC50_Aries_Network_Architecture.md index ce097771e..0b5acfd04 100644 --- a/docs/Scientific_Computing/Manuals_and_User_Guides/XC50_Aries_Network_Architecture.md +++ b/docs/Scientific_Computing/Manuals_and_User_Guides/XC50_Aries_Network_Architecture.md @@ -30,20 +30,20 @@ XC50 cabinets are an Electrical "group". Māui has 1.5 groups. The performance characteristics are: 1. 1. Intra-Chassis -1. Backplane -2. 15 links in the backplane -3. Rank 1 (green) Network -4. 14 Gbps -2. Intra-group -1. Copper cables -2. 15 links in 5 connectors -3. Rank 2 (black) Network -4. 14 Gbps -3. Inter-group links -1. Optical -2. 10 links in 5 connectors -3. Rank 3 (blue) Network -4. 12.5 Gbps + 1. Backplane + 2. 15 links in the backplane + 3. Rank 1 (green) Network + 4. 14 Gbps + 2. Intra-group + 1. Copper cables + 2. 15 links in 5 connectors + 3. Rank 2 (black) Network + 4. 14 Gbps + 3. Inter-group links + 1. Optical + 2. 10 links in 5 connectors + 3. Rank 3 (blue) Network + 4. 12.5 Gbps The centrepiece of the Aries network is dynamic routing through a large variety of different routes from Aries A to Aries B. Therewith the diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Debugging.md b/docs/Scientific_Computing/Profiling_and_Debugging/Debugging.md index 9d0843019..e76cc886f 100644 --- a/docs/Scientific_Computing/Profiling_and_Debugging/Debugging.md +++ b/docs/Scientific_Computing/Profiling_and_Debugging/Debugging.md @@ -30,10 +30,10 @@ compile option), otherwise only nameless memory address will be provided. 
- [Analysing core files with -gdb](#h_cf410d73-e14d-4abf-897a-374c965aa9dc) + gdb](#h_cf410d73-e14d-4abf-897a-374c965aa9dc) - [ARM DDT](#h_c3a74e40-cb68-4f35-b81e-ebf496c587eb) - [Abnormal Termination Processing -(Māui)](#h_214a9eb8-a227-421d-a4c2-57f0309a61ec) + (Māui)](#h_214a9eb8-a227-421d-a4c2-57f0309a61ec)  ## Tracing job scripts @@ -53,7 +53,7 @@ command for ONE core file: ``` sl gdb -c core.12345 /path/to/bin/exe ... -bt + bt ``` This assumes that the crashing job used the executable @@ -80,7 +80,7 @@ work properly DDT needs to have debug symbols provided by the application binary (compiled with e.g. \`-g\`  option). DDT can be used using the module \`forge\`. There are basically 2 ways to use the debugger, interactive using the GUI and on the command line (bash -script) using the so called "offline" mode. +script) using the so called "offline" mode.  ### DDT offline mode @@ -89,7 +89,7 @@ scripts without a GUI. Which is useful especially if you have long lasting jobs to debug or long queuing times. To use this so called "offline mode" you just need to add \`ddt --offline\` in front of the srun statement. You can add more arguments for example to print the -values of variables. +values of variables.  ``` sl ddt --offline --break-at=fail.c:14 --evaluate="k;n" srun -n 4 @@ -139,7 +139,7 @@ there, e.g. hyperthreading options, accounts and qos. In the Environment Variables section you can load necessary modules. After submitting the task, DDT launches the application (wait for the -workload manager if necessary) and opens the following window. +workload manager if necessary) and opens the following window.  
![DDT\_overview.PNG](../../assets/images/Debugging_1.PNG) @@ -150,7 +150,7 @@ opportunity to set break/watch points, and define the type execution detailed information see the [DDT manual](https://developer.arm.com/docs/101136/latest/ddt) - +  ## ATP (Cray Abnormal Termination Processing) @@ -159,7 +159,7 @@ manual](https://developer.arm.com/docs/101136/latest/ddt) Abnormal Termination Processing (ATP) is a system that monitors Cray XC System (Maui) user applications, and should an application take a system trap, ATP preforms analysis on the dying application. All of the stack -backtraces of the application processes are gathered into a merged +backtraces of the application processes are gathered into a merged stack backtrace tree and written to disk as the file "atpMergedBT.dot". The stack backtrace for the first process to die is sent to stderr as is the number of the signal that caused the death. If the core file size @@ -172,12 +172,12 @@ An example output looks like: Application 427046 is crashing. ATP analysis proceeding... 
ATP Stack walkback for Rank 0 starting: -_start@start.S:118 -__libc_start_main@libc-start.c:289 -main@fail.c:65 -m_routine@fail.c:38 -calculation@fail.c:31 -do_task@fail.c:25 + _start@start.S:118 + __libc_start_main@libc-start.c:289 + main@fail.c:65 + m_routine@fail.c:38 + calculation@fail.c:31 + do_task@fail.c:25 ATP Stack walkback for Rank 0 done Process died with signal 8: 'Floating point exception' Forcing core dumps of ranks 0, 1 diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md b/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md index aea6748be..567277131 100644 --- a/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md +++ b/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-ARM_MAP.md @@ -116,7 +116,7 @@ After *submit*ting, MAP will wait until the job is allocated, connect to the processes, run the program, gather all the data and present the profile information. - +  ## MAP Profile diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md b/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md index 5a95fc5ca..46c552e19 100644 --- a/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md +++ b/docs/Scientific_Computing/Profiling_and_Debugging/Profiler-VTune.md @@ -21,31 +21,31 @@ zendesk_section_id: 360000278935 -## What is VTune? +## What is VTune? -VTune is a **performance** analysis tool. +VTune is a **performance** analysis tool. It can be used to identify and **analyse** various aspects in both serial and parallel programs and can be used for both OpenMP and MPI -applications. +applications. It can be used with a command line interface (**CLI**) or a graphical -user interface (**GUI**). +user interface (**GUI**). + + +  - - - -## Where to find more resources on VTune? +## Where to find more resources on VTune? 
- Main page is at -[https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/vtune-profiler.html](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/vtune-profiler.html#gs.bjani9) + [https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/vtune-profiler.html](https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/vtune-profiler.html#gs.bjani9) - Tutorials are available at - + ## How to use VTune? - + VTune is available on Mahuika by loading the VTune **module**. ``` sl @@ -69,14 +69,14 @@ amplxe-cl -help -## How do I profile an application with VTune? +## How do I profile an application with VTune? The hotspot analysis is the most commonly used analysis and generally the first approach to optimizing an application. -- Example on Mahuika with the matrix sample. -The matrix sample is composed of a pre-built matrix in C++ for -matrix multiplication. +- Example on Mahuika with the matrix sample. + The matrix sample is composed of a pre-built matrix in C++ for + matrix multiplication. ``` sl $ ml VTune/2019_update8 @@ -85,13 +85,13 @@ $ cd matrix $ amplxe-cl -collect hotspots ./matrix ``` - + The **amplxe-cl** command collects hotspots data. -The option **collect** specifies the collection experiment to run. +The option **collect** specifies the collection experiment to run. The option **hotspots** is to collect basic hotspots to have a general -performance overview. - +performance overview. + This is the type of output you are going to get: ``` sl @@ -157,7 +157,7 @@ amplxe: Executing actions 100 % done The output one receives the overall elapsed and idle times as well as the CPU times of the individual functions in descending order (list of -hotspots). +hotspots). The utilization of the CPUs is also analyzed and judged. 
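The hotspots example above only shows the collection step. As a sketch (not verified against this VTune release), the same `amplxe-cl` binary can re-render a finished result as a text report; `r000hs` is assumed here to be the default result-directory name for the first hotspots run, so adjust it to whatever `-collect` actually created.

``` sl
# print the report command for a previously collected result directory
# (r000hs is an assumed default name, not confirmed by the NeSI docs)
result_dir=r000hs
echo "amplxe-cl -report hotspots -r $result_dir"
```

Running the printed command on a login node avoids re-executing the instrumented program just to re-read its profile.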
@@ -175,3 +175,4 @@ module load VTune amplxe-gui --path-to-open ``` +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md b/docs/Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md index 98067bd3f..366fa2a1e 100644 --- a/docs/Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md +++ b/docs/Scientific_Computing/Profiling_and_Debugging/Slurm_Native_Profiling.md @@ -25,10 +25,10 @@ Job resource usage can be determined on job completion by checking the following sacct columns; - MaxRSS - Peak memory usage. -- TotalCPU - Check *Elapsed* x *Alloc *≈*TotalCPU* +- TotalCPU - Check *Elapsed* x *Alloc *≈*TotalCPU*  However if you want to examine resource usage over the run-time of your -job, +job, the line `#SBATCH --profile task` can be added to your script. That will cause profile data to be recorded every 30 seconds throughout @@ -37,8 +37,8 @@ recommend increasing/decreasing that sampling frequency, so for example when profiling a job of less than 1 hour it would be OK to sample every second by adding `#SBATCH --acctg-freq=1`, and for a week long job the rate should be reduced to once every 5 -minutes: `#SBATCH --acctg-freq=300`. - +minutes: `#SBATCH --acctg-freq=300`. + On completion of your job, collate the data into an HDF5 file using `sh5util -j `, this will collect the results from the nodes where your job ran and write into an HDF5 file named: `job_.h5` @@ -47,7 +47,7 @@ You can plot the contents of this file with the command `nn_profile_plot job_.h5`, this will generate a file named `job__profile.png`. -Alternatively you could use one of the following scripts. +Alternatively you could use one of the following scripts.  
- [Python](https://github.com/nesi/nesi-tools/blob/main/.dev_nn_profile_plot.py) - [MATLAB](https://github.com/CallumWalley/slurm_native_h5_plotter) diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-02-2023.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-02-2023.md index 8d9663953..5d3c193e8 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-02-2023.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-02-2023.md @@ -25,11 +25,11 @@ zendesk_section_id: 360001150156 - Updated JupyterHub to v2.3.1 - Updated JupyterLab to v3.5.3 - Switched to Python 3.10 for running JupyterLab (kernels are -unaffected) -- Note: if you have previously installed Python packages in your -home directory using Python 3.10, we recommend cleaning out your -*~/.local/Python-3.10-gimkl-2022a* directory, as it could -conflict with our JupyterLab installation, and consider -[Installing packages in a Python virtual -environment](https://support.nesi.org.nz/hc/en-gb/articles/207782537-Python#installing_packages_in_a_python_virtual_environment) -instead \ No newline at end of file + unaffected) + - Note: if you have previously installed Python packages in your + home directory using Python 3.10, we recommend cleaning out your + *~/.local/Python-3.10-gimkl-2022a* directory, as it could + conflict with our JupyterLab installation, and consider + [Installing packages in a Python virtual + environment](https://support.nesi.org.nz/hc/en-gb/articles/207782537-Python#installing_packages_in_a_python_virtual_environment) + instead \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-06-2022.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-06-2022.md 
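The `--acctg-freq` guidance in the Slurm_Native_Profiling hunk above (1 s for jobs under an hour, 300 s for a week-long job) amounts to keeping the number of samples roughly constant. A minimal sketch of that rule of thumb follows; the target of ~3600 samples is an assumption chosen to reproduce the documented endpoints, not a NeSI recommendation.

``` sl
# pick a sampling interval that yields roughly 3600 profile samples
walltime_s=604800                 # one week of wall time, in seconds
freq=$(( walltime_s / 3600 ))     # seconds between samples
[ "$freq" -lt 1 ] && freq=1       # never sample more than once per second
echo "#SBATCH --acctg-freq=$freq"
```

For the week-long case this prints `#SBATCH --acctg-freq=168`, the same order of magnitude as the documented 300 s suggestion.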
index 636f9a784..af9399a76 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-06-2022.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-06-2022.md @@ -26,31 +26,31 @@ zendesk_section_id: 360001150156 - Updated JupyterLab version to v3.4.2 - Updated -[RStudio-on-NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) -(v0.22.5): fix library path when using NeSI R package in RStudio -(e.g. R-bundle-Bioconductor) + [RStudio-on-NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) + (v0.22.5): fix library path when using NeSI R package in RStudio + (e.g. R-bundle-Bioconductor) - Plotly extension re-added (missing in the previous release) - Added [papermill](https://pypi.org/project/papermill/) extension - Updated [NeSI Virtual -Desktop](https://support.nesi.org.nz/hc/en-gb/articles/360001600235) -to v2.4.1 -- ``` sl -## Image changes -- Update default Firefox version. -- Update to use singularity 3.8.5. -- Switched to rocky8 image. -- Added chrome, strace, sview and xfce-terminal to image. -- Added some libraries need for ANSYS -- Added missing GLX libraries. - -## Bug fixes -- Fixed faulty startup messages -- Fixed entrypoint duplication issue. -- unset 'SLURM_EXPORT_ENV' before starting desktop. - -## Refactoring -- Removed dependency on system vdt repo. -- Removed faulty & uneeded bind paths. -- Removed debug by default and hardcoded verbose. -- replaced VDT_HOME with XDG equiv -``` \ No newline at end of file + Desktop](https://support.nesi.org.nz/hc/en-gb/articles/360001600235) + to v2.4.1 + - ``` sl + # Image changes + - Update default Firefox version. + - Update to use singularity 3.8.5. + - Switched to rocky8 image. + - Added chrome, strace, sview and xfce-terminal to image. + - Added some libraries need for ANSYS + - Added missing GLX libraries. 
+ + # Bug fixes + - Fixed faulty startup messages + - Fixed entrypoint duplication issue. + - unset 'SLURM_EXPORT_ENV' before starting desktop. + + # Refactoring + - Removed dependency on system vdt repo. + - Removed faulty & uneeded bind paths. + - Removed debug by default and hardcoded verbose. + - replaced VDT_HOME with XDG equiv + ``` \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-11-2021.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-11-2021.md index de64b3d62..fe83700ab 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-11-2021.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_02-11-2021.md @@ -25,5 +25,6 @@ zendesk_section_id: 360001150156 ## New and Improved - Enabled jupyter server proxy to forward requests to a different host -(compute node). + (compute node). +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-05-2021.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-05-2021.md index 32a2bea30..6b6562d5b 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-05-2021.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-05-2021.md @@ -24,8 +24,8 @@ zendesk_section_id: 360001150156 ## New and Improved -- JupyterLab upgrade to v3.0.15. 
-Read more on [user-facing -changes](https://jupyterlab.readthedocs.io/en/stable/getting_started/changelog.html#user-facing-changes) -and the installation of extensions here: -[https://jupyterlab.readthedocs.io/en/stable/user/extensions.html](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html#finding-extensions) \ No newline at end of file +- JupyterLab upgrade to v3.0.15. + Read more on [user-facing + changes](https://jupyterlab.readthedocs.io/en/stable/getting_started/changelog.html#user-facing-changes) + and the installation of extensions here:  + [https://jupyterlab.readthedocs.io/en/stable/user/extensions.html](https://jupyterlab.readthedocs.io/en/stable/user/extensions.html#finding-extensions) \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-07-2022.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-07-2022.md index 1059b8cf1..a766db06b 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-07-2022.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_12-07-2022.md @@ -23,6 +23,7 @@ zendesk_section_id: 360001150156 ## New and Improved - Added the `pyviz_comms` package to allow fully interactive usage of -[HoloViz](https://holoviz.org/index.html) tools within notebooks (in -particular Panel and HoloViews). + [HoloViz](https://holoviz.org/index.html) tools within notebooks (in + particular Panel and HoloViews). 
+  \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-10-2021.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-10-2021.md index 9a1979723..8fde2b3be 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-10-2021.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-10-2021.md @@ -25,6 +25,6 @@ zendesk_section_id: 360001150156 ## New and Improved - Changed hub session timeout to 16 hours. Users will be prompted to -login again after 16 hrs. aligned with max. wall time for JupyterLab -instances. + login again after 16 hrs, aligned with the max. wall time for JupyterLab + instances.  - JupyterHub fixed: improvements to avoid 403 errors \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-11-2023.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-11-2023.md index 49f44e884..9fa513cca 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-11-2023.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_14-11-2023.md @@ -26,9 +26,9 @@ zendesk_section_id: 360001150156 ## Fixed - We are now closing user session when the corresponding Jupyter -server is stopped, to avoid idle sessions to linger on the host - + server is stopped, to keep idle sessions from lingering on the host +  If you have any questions about any of the improvements or fixes, please [contact NeSI diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_15-06-2023.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_15-06-2023.md index
0e5b01e31..19795ebf2 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_15-06-2023.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_15-06-2023.md @@ -23,8 +23,8 @@ zendesk_section_id: 360001150156 ## New and Improved - If [jupyter.nesi.org.nz](http://my.nesi.org.nz/) portal cannot -connect to the NeSI server, a descriptive error message will be -displayed instead of internal error 500 + connect to the NeSI server, a descriptive error message will be + displayed instead of internal error 500 ## Fixed diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_16-09-2021.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_16-09-2021.md index 0fc7923f4..66c6900c4 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_16-09-2021.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_16-09-2021.md @@ -24,13 +24,14 @@ zendesk_section_id: 360001150156 ## New and Improved -- JupyterLab upgrade to v3.1.9 (Python updated from v3.8 to v3.9) -Read more on [changes and bug -fixes](https://jupyterlab.readthedocs.io/en/stable/getting_started/changelog.html#id12) +- JupyterLab upgrade to v3.1.9 (Python updated from v3.8 to v3.9) + Read more on [changes and bug + fixes](https://jupyterlab.readthedocs.io/en/stable/getting_started/changelog.html#id12) - Updated to JupyterHub 1.4.2 -- Rendering time remaining, CPU and Memory usage in the top menu bar -![mceclip0.png](../../assets/images/jupyter-nesi-org-nz_release_notes_16-09-2021.png) +- Rendering time remaining, CPU and Memory usage in the top menu bar + ![mceclip0.png](../../assets/images/jupyter-nesi-org-nz_release_notes_16-09-2021.png) - Confirmed JupyterLab extension for version control using Git -working -See + working + See +  \ 
No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_19-05-2023.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_19-05-2023.md index 47add615c..d5b6d6588 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_19-05-2023.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_19-05-2023.md @@ -23,4 +23,4 @@ zendesk_section_id: 360001150156 ## Fixed - Updated some Python packages in the Python 3.10 kernel to fix an -issue with ipywidgets not working properly in notebooks \ No newline at end of file + issue with ipywidgets not working properly in notebooks \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_24-09-2021.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_24-09-2021.md index 9e84f4ef7..2aa8f5af6 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_24-09-2021.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_24-09-2021.md @@ -27,4 +27,4 @@ zendesk_section_id: 360001150156 - Fixed Singularity version for RStudio and VirtualDesktop kernels - Fixed pywidgets installation - JupyterHub fixed: in case a job takes more than 300 seconds, don't -start the job to avoid 'ghost' instances of JupyterLab \ No newline at end of file + start the job to avoid 'ghost' instances of JupyterLab \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_25-08-2022.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_25-08-2022.md index 6a3819f1d..f6ce0f085 100644 --- 
a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_25-08-2022.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_25-08-2022.md @@ -23,24 +23,24 @@ zendesk_section_id: 360001150156 ## New and Improved - Updated [RStudio-on-NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360004337836) -to v0.24.0 -- RStudio server v2022.07.1 -- Allow usage of NeSI environment modules in RStudio terminal -(beta) -- Allow usage of Slurm commands in RStudio terminal (beta) + to v0.24.0 + - RStudio server v2022.07.1 + - Allow usage of NeSI environment modules in RStudio terminal + (beta) + - Allow usage of Slurm commands in RStudio terminal (beta) - Updated [NeSI Virtual -Desktop](https://support.nesi.org.nz/hc/en-gb/articles/360001600235) -to v2.4.3 -- Utilising latest version of -[Singularity](https://support.nesi.org.nz/hc/en-gb/articles/360001107916) + Desktop](https://support.nesi.org.nz/hc/en-gb/articles/360001600235) + to v2.4.3 + - Utilising latest version of + [Singularity](https://support.nesi.org.nz/hc/en-gb/articles/360001107916) ## Fixed - RStudio -- Addressed issue preventing user installation of rmarkdown when -using R/4.1.0-gimkl-2020a -- Addressed knitr PDF compilation when using R/4.2.1-gimkl-2022a + - Addressed issue preventing user installation of rmarkdown when + using R/4.1.0-gimkl-2020a + - Addressed knitr PDF compilation when using R/4.2.1-gimkl-2022a - NeSI Virtual Desktop -- Added dependencies to fix OpenGL related issues -- Internal refactoring for maintenance purpose of the permission -with skeleton files in container build \ No newline at end of file + - Added dependencies to fix OpenGL related issues + - Internal refactoring for maintenance purpose of the permission + with skeleton files in container build \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_28-06-2022.md 
b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_28-06-2022.md index 6c1dc8db8..44e0e23f6 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_28-06-2022.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_28-06-2022.md @@ -26,9 +26,9 @@ zendesk_section_id: 360001150156 - Updated JupyterLab version to v3.4.3 - +  ## Fixed - Addressed issue handling the "slurm job id" with some Python modules -that depend on MPI \ No newline at end of file + that depend on MPI \ No newline at end of file diff --git a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_31-03-2022.md b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_31-03-2022.md index 534486c68..0c31fa387 100644 --- a/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_31-03-2022.md +++ b/docs/Scientific_Computing/Release_Notes_jupyter-nesi-org-nz/jupyter-nesi-org-nz_release_notes_31-03-2022.md @@ -25,7 +25,7 @@ zendesk_section_id: 360001150156 ## New and Improved - Updated JupyterLab version -to `JupyterLab/.2022.2.0-gimkl-2020a-3.2.8` + to `JupyterLab/.2022.2.0-gimkl-2020a-3.2.8` - Added user guidance on options (when launching a server instance) - Updated available GPU options - Added links to NeSI documentation \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md index a2353b893..9621730cf 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md +++ 
b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checking_your_projects_usage_using_nn_corehour_usage.md @@ -43,7 +43,7 @@ time. `-c`, `--calendar-months` Break usage down so that the time periods are the first and last days of -the calendar months, instead +the calendar months, instead of working back a month at a time from today. `-n`, `--number-of-months=NUM` @@ -58,7 +58,7 @@ when the cluster commenced operations. Display results for the user `USERNAME`. The default user is the current user. - +  Treat all subsequent entries on the command line, including those starting with a dash (`-`), as arguments instead of as options. diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md index d4d406594..58312fbdb 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Checksums.md @@ -30,11 +30,11 @@ Checksums can be used to check for minor errors that may have been introduced into a dataset. For example: - After downloading a file (compare your generated checksum with the -checksum provided by the vendor). + checksum provided by the vendor). - When copying a file onto the cluster (generate a checksum on your -local machine and another on the cluster). + local machine and another on the cluster). - Verifying your results/workflow. (making a checksum of a results -file can be a quick way to confirm nothing has changed). + file can be a quick way to confirm nothing has changed). - Corroborate files when working in a team. While not necessary to do in every case, every time, file integrity @@ -43,7 +43,7 @@ should be one of the first things you check when troubleshooting. ## Example The file '`corrupt.bin`' has had 1 byte changed, yet on inspection would -appear identical. +appear identical.  
``` sl -rw-rw-r--  1  393315  copy.bin @@ -63,10 +63,10 @@ ef749eb4110c2a3b3c747390095d0b76 corrupt.bin Note that filename, path, permissions or any other metadata does not affect the checksum. !!! prerequisite Note -Checksum functions are designed so that similar files *will not* -produce similar hashes. -You will only need to compare a few characters of the string to -confirm validity. + Checksum functions are designed so that similar files *will not* + produce similar hashes. + You will only need to compare a few characters of the string to + confirm validity. ## Commands @@ -80,5 +80,6 @@ commands. | SHA256 | `sha256sum `*`filename.txt`* | `certUtil -hashfile `*`filename.txt`*` sha256` | `shasum -a 256 `*`filename.txt`* | | MD5 | `md5sum `*`filename.txt`* | `certUtil -hashfile `*`filename.txt`*` md5` | `md5 `*`filename.txt`* | +  - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md index 47c8a83e4..86ab7718e 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Fair_Share.md @@ -37,21 +37,21 @@ a given **period**. An institution also has a percentage **Fair Share entitlement** of each machine's deliverable capacity over that same period. !!! prerequisite Note -Although we use the term "Fair Share entitlement" in this article, it -bears only a loose relationship to an institution's contractual -entitlement to receive allocations from the NeSI HPC Compute & -Analytics service. The Fair Share entitlement is managed separately -for each cluster, and is adjusted as needed by NeSI staff so that each -institution can receive, as nearly as possible, its contractual -entitlement to the service as a whole, as well as a mix of cluster -hours that corresponds closely to the needs of that institution's -various project teams. 
+ Although we use the term "Fair Share entitlement" in this article, it + bears only a loose relationship to an institution's contractual + entitlement to receive allocations from the NeSI HPC Compute & + Analytics service. The Fair Share entitlement is managed separately + for each cluster, and is adjusted as needed by NeSI staff so that each + institution can receive, as nearly as possible, its contractual + entitlement to the service as a whole, as well as a mix of cluster + hours that corresponds closely to the needs of that institution's + various project teams. - **Your project's expected rate of use** = (**your institution's Fair -Share entitlement** × **your project's allocation**) / (**sum of -your institution's allocations** × **period**) + Share entitlement** × **your project's allocation**) / (**sum of + your institution's allocations** × **period**) - **Your institution's expected rate of use** = your institution's -**Fair Share entitlement** on that machine + **Fair Share entitlement** on that machine If an entity — an institution or project team — is using the machine more slowly than expected, for Fair Share purposes it is considered a @@ -59,17 +59,17 @@ light user. By contrast, one using the machine faster than expected is a heavy user. - Projects at lightly using institutions get a higher Fair Share score -than those at heavily using institutions. + than those at heavily using institutions. - Within each institution, lightly using projects get a higher Fair -Share score than heavily using projects. + Share score than heavily using projects. - Using **faster** than your **expected rate of usage** will usually -cause your Fair Share score to **decrease**. The more extreme the -overuse, the more severe the likely drop. + cause your Fair Share score to **decrease**. The more extreme the + overuse, the more severe the likely drop. - Using **slower** than your **expected rate of usage** will usually -cause your Fair Share score to **increase**. 
The more extreme the -underuse, the greater the Fair Share bonus. + cause your Fair Share score to **increase**. The more extreme the + underuse, the greater the Fair Share bonus. - Using the cluster **unevenly** will cause your Fair Share score to -**decrease**. + **decrease**. ## What is Fair Share? @@ -89,23 +89,23 @@ days) — and thus the expected rates of use of those same allocations. Therefore: - If the size of your allocation increases, your project's share of -the cluster will increase. Conversely, if the size of your -allocation decreases, your project's share of the cluster will -decrease. + the cluster will increase. Conversely, if the size of your + allocation decreases, your project's share of the cluster will + decrease. - If the size of another project's allocation increases, your -project's share of the cluster will decrease, since, even though -your allocation's size has remained the same, the total size of -other allocations has increased and thus your allocation's share has -decreased. Conversely, if the size of the other project's allocation -decreases, your project's share of the cluster will increase. + project's share of the cluster will decrease, since, even though + your allocation's size has remained the same, the total size of + other allocations has increased and thus your allocation's share has + decreased. Conversely, if the size of the other project's allocation + decreases, your project's share of the cluster will increase. - If the cluster gets larger (e.g. we purchase and install more -computing capacity), your project's share of the cluster will not -change, but that share of the cluster will correspond to a higher -rate of core hour usage. This situation will only last until more -allocations are issued, or existing allocations are made larger, to -take advantage of the increased capacity. The opposite will occur if -the cluster shrinks, though cluster shrinkage is not expected to -occur. 
+ computing capacity), your project's share of the cluster will not + change, but that share of the cluster will correspond to a higher + rate of core hour usage. This situation will only last until more + allocations are issued, or existing allocations are made larger, to + take advantage of the increased capacity. The opposite will occur if + the cluster shrinks, though cluster shrinkage is not expected to + occur. On Mahuika and the Māui XC nodes, Fair Share is not designed to ensure that all project teams get the same share of the cluster. @@ -176,11 +176,11 @@ page](https://slurm.schedmd.com/priority_multifactor.html#fairshare) ## How do I check my project's Fair Share score? - The command `nn_corehour_usage `, on a Mahuika or Māui -login node, will show, along with other information, the current -fair share score and ranking of the specified project. + login node, will show, along with other information, the current + fair share score and ranking of the specified project. - The `sshare` command, on a Mahuika login node, will show the fair -share tree. A related command, `nn_sshare_sorted`, will show -projects in order from the highest fair share score to the lowest. + share tree. A related command, `nn_sshare_sorted`, will show + projects in order from the highest fair share score to the lowest. In our current configuration, Fair Share scores are attached to projects, not to individual users. diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md index a4146f698..f7bd34337 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/GPU_use_on_NeSI.md @@ -27,19 +27,19 @@ For application specific settings (e.g. OpenMP, Tensorflow on GPU, ...), please have a look at the dedicated pages listed at the end of this page. !!! 
prerequisite Important -An overview of available GPU cards is available in the [Available GPUs -on NeSI](https://support.nesi.org.nz/hc/en-gb/articles/4963040656783) -support page. -Details about GPU cards for each system and usage limits are in the -[Mahuika Slurm -Partitions](https://support.nesi.org.nz/hc/en-gb/articles/360000204076) -and [Māui\_Ancil (CS500) Slurm -Partitions](https://support.nesi.org.nz/hc/en-gb/articles/360000204116#_Toc514341606) -support pages. -Details about pricing in terms of compute units can be found in the -[What is an -allocation?](https://support.nesi.org.nz/hc/en-gb/articles/360001385735) -page. + An overview of available GPU cards is available in the [Available GPUs + on NeSI](https://support.nesi.org.nz/hc/en-gb/articles/4963040656783) + support page. + Details about GPU cards for each system and usage limits are in the + [Mahuika Slurm + Partitions](https://support.nesi.org.nz/hc/en-gb/articles/360000204076) + and [Māui\_Ancil (CS500) Slurm + Partitions](https://support.nesi.org.nz/hc/en-gb/articles/360000204116#_Toc514341606) + support pages. + Details about pricing in terms of compute units can be found in the + [What is an + allocation?](https://support.nesi.org.nz/hc/en-gb/articles/360001385735) + page. 
## Request GPU resources using Slurm @@ -64,85 +64,85 @@ cases: - 1 P100 GPU on Mahuika -``` sl -#SBATCH --gpus-per-node=P100:1 -``` + ``` sl + #SBATCH --gpus-per-node=P100:1 + ``` - 1 P100 GPU on Māui Ancillary Nodes -``` sl -#SBATCH --partition=nesi_gpu -#SBATCH --gpus-per-node=1 -``` + ``` sl + #SBATCH --partition=nesi_gpu + #SBATCH --gpus-per-node=1 + ``` - 2 P100 GPUs per node on Mahuika -``` sl -#SBATCH --gpus-per-node=P100:2 -``` + ``` sl + #SBATCH --gpus-per-node=P100:2 + ``` -*You cannot ask for more than 2 P100 GPU per node on Mahuika.* + *You cannot ask for more than 2 P100 GPU per node on Mahuika.* - 1 A100 (40GB) GPU on Mahuika -``` sl -#SBATCH --gpus-per-node=A100:1 -``` + ``` sl + #SBATCH --gpus-per-node=A100:1 + ``` - 2 A100 (40GB) GPUs on Mahuika -``` sl -#SBATCH --gpus-per-node=A100:2 -``` + ``` sl + #SBATCH --gpus-per-node=A100:2 + ``` -*You cannot ask for more than 2 A100 (40GB) GPUs per node on -Mahuika.* + *You cannot ask for more than 2 A100 (40GB) GPUs per node on + Mahuika.* - 1 A100-1g.5gb GPU on Mahuika -``` sl -#SBATCH --gpus-per-node=A100-1g.5gb:1 -``` + ``` sl + #SBATCH --gpus-per-node=A100-1g.5gb:1 + ``` -*This type of GPU is limited to 1 job per user and recommended for -development and debugging.* + *This type of GPU is limited to 1 job per user and recommended for + development and debugging.* - 1 A100 (80GB) GPU on Mahuika -``` sl -#SBATCH --partition=hgx -#SBATCH --gpus-per-node=A100:1 -``` + ``` sl + #SBATCH --partition=hgx + #SBATCH --gpus-per-node=A100:1 + ``` -*These GPUs are on Milan nodes, check the [dedicated support -page](https://support.nesi.org.nz/knowledge/articles/6367209795471/) -for more information.* + *These GPUs are on Milan nodes, check the [dedicated support + page](https://support.nesi.org.nz/knowledge/articles/6367209795471/) + for more information.* - 4 A100 (80GB & NVLink) GPU on Mahuika -``` sl -#SBATCH --partition=hgx -#SBATCH --gpus-per-node=A100:4 -``` + ``` sl + #SBATCH --partition=hgx + #SBATCH 
--gpus-per-node=A100:4 + ``` -*These GPUs are on Milan nodes, check the [dedicated support -page](https://support.nesi.org.nz/knowledge/articles/6367209795471/) -for more information.* + *These GPUs are on Milan nodes, check the [dedicated support + page](https://support.nesi.org.nz/knowledge/articles/6367209795471/) + for more information.* -*You cannot ask for more than 4 A100 (80GB) GPUs per node on -Mahuika.* + *You cannot ask for more than 4 A100 (80GB) GPUs per node on + Mahuika.* - 1 A100 GPU on Mahuika, regardless of the type -``` sl -#SBATCH --partition=gpu,hgx -#SBATCH --gpus-per-node=A100:1 -``` + ``` sl + #SBATCH --partition=gpu,hgx + #SBATCH --gpus-per-node=A100:1 + ``` -*With this configuration, your job will spend less time in the -queue, using whichever A100 GPU is available. It may land on a -regular Mahuika node (A100 40GB GPU) or on a Milan node (A100 80GB -GPU).* + *With this configuration, your job will spend less time in the + queue, using whichever A100 GPU is available. It may land on a + regular Mahuika node (A100 40GB GPU) or on a Milan node (A100 80GB + GPU).* You can also use the `--gpus-per-node`option in [Slurm interactive sessions](https://support.nesi.org.nz/hc/en-gb/articles/360001316356), @@ -155,17 +155,17 @@ srun --job-name "InteractiveGPU" --gpus-per-node 1 --cpus-per-task 8 --mem 2GB - will request and then start a bash session with access to a GPU, for a duration of 30 minutes. !!! prerequisite Important -When you use the `--gpus-per-node`option, Slurm automatically sets the -`CUDA_VISIBLE_DEVICES` environment variable inside your job -environment to list the index/es of the allocated GPU card/s on each -node. 
-``` sl -$ srun --job-name "GPUTest" --gpus-per-node=P100:2 --time 00:05:00 --pty bash -srun: job 20015016 queued and waiting for resources -srun: job 20015016 has been allocated resources -$ echo $CUDA_VISIBLE_DEVICES -0,1 -``` + When you use the `--gpus-per-node`option, Slurm automatically sets the + `CUDA_VISIBLE_DEVICES` environment variable inside your job + environment to list the index/es of the allocated GPU card/s on each + node. + ``` sl + $ srun --job-name "GPUTest" --gpus-per-node=P100:2 --time 00:05:00 --pty bash + srun: job 20015016 queued and waiting for resources + srun: job 20015016 has been allocated resources + $ echo $CUDA_VISIBLE_DEVICES + 0,1 + ``` ## Load CUDA and cuDNN modules @@ -187,17 +187,17 @@ module spider CUDA Please contact us at if you need a version not available on the platform. !!! prerequisite Note -On Māui Ancillary Nodes, use `module avail CUDA` to list available -versions. + On Māui Ancillary Nodes, use `module avail CUDA` to list available + versions. The CUDA module also provides access to additional command line tools: - - - [**nvidia-smi**](https://developer.nvidia.com/nvidia-system-management-interface) -to directly monitor GPU resource utilisation, -- [**nvcc**](https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html) -to compile CUDA programs, -- [**cuda-gdb**](https://docs.nvidia.com/cuda/cuda-gdb/index.html) -to debug CUDA applications. + to directly monitor GPU resource utilisation, + - [**nvcc**](https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/index.html) + to compile CUDA programs, + - [**cuda-gdb**](https://docs.nvidia.com/cuda/cuda-gdb/index.html) + to debug CUDA applications. 
In addition, the [cuDNN](https://developer.nvidia.com/cudnn) (NVIDIA CUDA® Deep Neural Network library) library is accessible via its @@ -251,9 +251,9 @@ The content of job output file would look like: $ cat slurm-20016124.out The following modules were not unloaded: -(Use "module --force purge" to unload all): + (Use "module --force purge" to unload all): -1) slurm 2) NeSI + 1) slurm 2) NeSI Wed May 12 12:08:27 2021 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 | @@ -277,8 +277,8 @@ Wed May 12 12:08:27 2021 CUDA_VISIBLE_DEVICES=0 ``` !!! prerequisite Note -CUDA\_VISIBLE\_DEVICES=0 indicates that this job was allocated to CUDA -GPU index 0 on this node. It is not a count of allocated GPUs. + CUDA\_VISIBLE\_DEVICES=0 indicates that this job was allocated to CUDA + GPU index 0 on this node. It is not a count of allocated GPUs. ## NVIDIA Nsight Systems and Compute profilers @@ -318,17 +318,17 @@ line tool or the [ncu-ui](https://docs.nvidia.com/nsight-compute/NsightCompute/index.html) graphical interface. !!! prerequisite Important -The `nsys-ui` and `ncu-ui` tools require access to a display server, -either via -[X11](https://support.nesi.org.nz/hc/en-gb/articles/360001075975-X11-on-NeSI) -or a [Virtual -Desktop](https://support.nesi.org.nz/hc/en-gb/articles/360001600235-Virtual-Desktop-via-Jupyter-on-NeSI). -You also need to load the `PyQt` module beforehand: -``` sl -module load PyQt/5.12.1-gimkl-2020a-Python-3.8.2 -module load Nsight-Systems/2020.5.1 -nsys-ui # this will work only if you have a graphical session -``` + The `nsys-ui` and `ncu-ui` tools require access to a display server, + either via + [X11](https://support.nesi.org.nz/hc/en-gb/articles/360001075975-X11-on-NeSI) + or a [Virtual + Desktop](https://support.nesi.org.nz/hc/en-gb/articles/360001600235-Virtual-Desktop-via-Jupyter-on-NeSI). 
+ You also need to load the `PyQt` module beforehand: + ``` sl + module load PyQt/5.12.1-gimkl-2020a-Python-3.8.2 + module load Nsight-Systems/2020.5.1 + nsys-ui # this will work only if you have a graphical session + ``` ## Application and toolbox specific support pages @@ -338,16 +338,16 @@ applications: - [ABAQUS](https://support.nesi.org.nz/hc/en-gb/articles/212457807-ABAQUS#gpus) - [GROMACS](https://support.nesi.org.nz/hc/en-gb/articles/360000792856-GROMACS#nvidia_gpu_container) - [Lambda -Stack](https://support.nesi.org.nz/hc/en-gb/articles/360002558216-Lambda-Stack) + Stack](https://support.nesi.org.nz/hc/en-gb/articles/360002558216-Lambda-Stack) - [Matlab](https://support.nesi.org.nz/hc/en-gb/articles/212639047-MATLAB#GPU) - [TensorFlow on -GPUs](https://support.nesi.org.nz/hc/en-gb/articles/360000990436-TensorFlow-on-GPUs) + GPUs](https://support.nesi.org.nz/hc/en-gb/articles/360000990436-TensorFlow-on-GPUs) And programming toolkits: - [Offloading to GPU with -OpenMP](https://support.nesi.org.nz/hc/en-gb/articles/360001127856-Offloading-to-GPU-with-OpenMP-) + OpenMP](https://support.nesi.org.nz/hc/en-gb/articles/360001127856-Offloading-to-GPU-with-OpenMP-) - [Offloading to GPU with OpenACC using the Cray -compiler](https://support.nesi.org.nz/hc/en-gb/articles/360001131076-Offloading-to-GPU-with-OpenACC-using-the-Cray-compiler) + compiler](https://support.nesi.org.nz/hc/en-gb/articles/360001131076-Offloading-to-GPU-with-OpenACC-using-the-Cray-compiler) - [NVIDIA GPU -Containers](https://support.nesi.org.nz/hc/en-gb/articles/360001500156-NVIDIA-GPU-Containers) \ No newline at end of file + Containers](https://support.nesi.org.nz/hc/en-gb/articles/360001500156-NVIDIA-GPU-Containers) \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md index 824c30fb9..e7779d4f8 100644 --- 
a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Hyperthreading.md @@ -35,7 +35,7 @@ and share some hardware with nearby logical cores. Physical cores are made up of two logical cores. Hyperthreading is enabled by default on NeSI machines, meaning, by -default, Slurm will allocate two threads to each physical core. +default, Slurm will allocate two threads to each physical core.  ## Hyperthreading with slurm @@ -51,7 +51,7 @@ of running Hyperthreaded (for example using [OpenMP](https://support.nesi.org.nz/hc/en-gb/articles/360001070496)) if `--cpus-per-task > 1`. - +  Setting `--hint=nomultithread` with srun or sbatch "causes Slurm to allocate only one thread from each core to this job". This will allocate @@ -91,43 +91,43 @@ considered a bonus. ## How to use Hyperthreading - Non-hyperthreaded jobs which use  `--mem-per-cpu` requests should -halve their memory requests as those are based on memory per logical -CPU, not per the number of threads or tasks.  For non-MPI jobs, or -for MPI jobs that request the same number of tasks on every node, we -recommend to specify `--mem` (i.e. memory per node) instead. See -[How to request memory -(RAM)](https://support.nesi.org.nz/hc/en-gb/articles/360001108756) -for more information. + halve their memory requests as those are based on memory per logical + CPU, not per the number of threads or tasks.  For non-MPI jobs, or + for MPI jobs that request the same number of tasks on every node, we + recommend to specify `--mem` (i.e. memory per node) instead. See + [How to request memory + (RAM)](https://support.nesi.org.nz/hc/en-gb/articles/360001108756) + for more information. - Non-MPI jobs which specify `--cpus-per-task` and use **srun** should -also set `--ntasks=1`, otherwise the program will be run twice in -parallel, halving the efficiency of the job. 
+ also set `--ntasks=1`, otherwise the program will be run twice in + parallel, halving the efficiency of the job. The precise rules about when Hyperthreading applies are as follows: ------------------------ ------------------------ ------------------------ -Mahuika Māui - -Jobs Never share physical -cores + ----------------------- ------------------------ ------------------------ +   Mahuika Māui -MPI tasks within the Never share physical Share physical cores by -same job cores default. You can -override this behaviour -by using -`--hint=nomultithread` -in your job submission -script. + Jobs Never share physical + cores -Threads within the same Share physical cores by -task default. You can -override this behaviour -by using -`--hint=nomultithread` -in your job submission -script. ------------------------ ------------------------ ------------------------ + MPI tasks within the Never share physical Share physical cores by + same job cores default. You can + override this behaviour + by using + `--hint=nomultithread` + in your job submission + script. + Threads within the same Share physical cores by + task default. You can + override this behaviour + by using + `--hint=nomultithread` + in your job submission + script. + ----------------------- ------------------------ ------------------------ +  ### How many logical CPUs will my job use or be charged for? @@ -254,3 +254,4 @@ such that N × (tasks per node) = 40.

+  \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md index 83f3b3e0e..e1c57b0f7 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_Checkpointing.md @@ -29,11 +29,11 @@ How checkpointing can be implemented depends on the application/code being used, some will have inbuilt methods whereas others might require some scripting. -## Queueing +## Queueing  Checkpointing code has the added advantage that it allows you to split your work into smaller jobs, allowing them to move through the queue -faster. +faster.  Below is an example of submitting the same job again, if previous has run successfully. @@ -41,7 +41,7 @@ run successfully. ``` sl # Slurm header '#SBATCH etc etc -sbatch --dependency=afterok:${SLURM_JOB_ID} "$0" +sbatch --dependency=afterok:${SLURM_JOB_ID} "$0" # "$0" is equal to the name of this script. # Code that implements checkpointing @@ -56,7 +56,7 @@ Another example for a job requiring explicit step inputs. n_steps=1000 starting_step=${1:-0} # Will be equal to first argument, or '0' if unset. -ending_step=$(( starting_step + n_steps )) +ending_step=$(( starting_step + n_steps )) # Submit next step with starting step equal to ending step of this job. sbatch --dependency=afterok:${SLURM_JOB_ID} "$0" ${ending_step} @@ -77,12 +77,12 @@ checkpoint='checkpoint_2020-03-09T0916.mat'; if exist(checkpoint,'file')==2, load(checkpoint);startindex=i;else startindex=1;end for i = startindex:100 -% Long running process + % Long running process -% Save workspace at end of each loop. -save(['checkpoint_', datestr(now, 'yyyy-mm-ddTHHMM')]) + % Save workspace at end of each loop. + save(['checkpoint_', datestr(now, 'yyyy-mm-ddTHHMM')]) end ``` !!! 
prerequisite Tip -We ***strongly*** recommend implementing checkpointing on any job -running longer than 3 days! \ No newline at end of file + We ***strongly*** recommend implementing checkpointing on any job + running longer than 3 days! \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md index 543c5eaf8..6cbce32a1 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Job_prioritisation.md @@ -31,7 +31,7 @@ Priority scores are determined by a number of factors: ## 1) Quality of Service The "debug" Quality of Service can be gained by adding the `sbatch` -command line option `--qos=debug`. +command line option `--qos=debug`. This adds 5000 to the job priority so raises it above all non-debug jobs, but is limited to one small job per user at a time: no more than 15 minutes and no more than 2 nodes. @@ -46,7 +46,7 @@ recent past compared to their expected rate of use (either by submitting and running many jobs, or by submitting and running large jobs) will have a lower priority, and projects with little recent activity compared to their expected rate of use will see their waiting jobs start sooner. -Fair Share contributes up to 1000 points to the job priority. To see + Fair Share contributes up to 1000 points to the job priority. To see the recent usage and current fair-share score of a project, you can use the command nn\_corehour\_usage. @@ -91,7 +91,7 @@ Jobs with a priority of 0 are in a "held" state and will never start without further intervention.  You can hold jobs with the command `scontrol hold ` and release them with `scontrol release `.  Jobs can also end up in this state when -they get requeued after a node failure. +they get requeued after a node failure.  
## Other Limits @@ -116,3 +116,4 @@ done on the HPCs. More information about backfill can be found [here](https://slurm.schedmd.com/sched_config.html). +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md index c7e39f3f5..5c2cf1f16 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Mahuika_Slurm_Partitions.md @@ -50,11 +50,11 @@ or more threads. All of a task's threads must run within the same node. ## General Limits - No individual job can request more than 20,000 CPU hours. This has -the consequence that a job can request more CPUs if it is shorter -(short-and-wide vs long-and-skinny). + the consequence that a job can request more CPUs if it is shorter + (short-and-wide vs long-and-skinny). - No individual job can request more than 576 CPUs (8 full nodes), -since larger MPI jobs are scheduled less efficiently and are -probably suitable for running on Māui. + since larger MPI jobs are scheduled less efficiently and are + probably suitable for running on Māui. - No user can have more than 1,000 jobs in the queue at a time. These limits are defaults and can be altered on a per-account basis if @@ -89,7 +89,7 @@ E.g.: sbatch: "bigmem" is not the most appropriate partition for this job, which would otherwise default to "large". If you believe this is incorrect then please contact support@nesi.org.nz and quote the Job ID number. ``` - +  @@ -148,7 +148,7 @@ partition.

+ 8

@@ -235,10 +235,10 @@ Nodes. See below for more info.

\*\*\* 1 NVIDIA Tesla A100 PCIe 40GB card divided into [7 MIG GPU slices](https://www.nvidia.com/en-us/technologies/multi-instance-gpu/) -(5GB each). +(5GB each).  \*\*\*\* NVIDIA Tesla A100 80GB, on a HGX baseboard with NVLink -GPU-to-GPU interconnect between the 4 GPUs +GPU-to-GPU interconnect between the 4 GPUs ## Quality of Service @@ -281,14 +281,14 @@ more details about Slurm and CUDA settings. - There is a per-project limit of 6 GPUs being used at a time. - There is also a per-project limit of 360 GPU-hours being allocated -to running jobs. This reduces the number of GPUs available for -longer jobs, so for example you can use 8 GPUs at a time if your -jobs run for a day, but only two GPUs if your jobs run for a week. -The intention is to guarantee that all users can get short debugging -jobs on to a GPU in a reasonably timely manner. + to running jobs. This reduces the number of GPUs available for + longer jobs, so for example you can use 8 GPUs at a time if your + jobs run for a day, but only two GPUs if your jobs run for a week. + The intention is to guarantee that all users can get short debugging + jobs on to a GPU in a reasonably timely manner.   - Each GPU job can use no more than 64 CPUs.  This is to ensure that -GPUs are not left idle just because their node has no remaining free -CPUs. + GPUs are not left idle just because their node has no remaining free + CPUs. - There is a limit of one A100-1g.5gb GPU job per user. ### Accessing A100 GPUs in the `hgx` partition @@ -299,11 +299,11 @@ connected via [NVLink](https://www.nvidia.com/en-us/data-center/nvlink/)): - Explicitly specify the partition to access them, with -`--partition=hgx`. + `--partition=hgx`. - Hosting nodes are Milan nodes. Check the [dedicated support -page](https://support.nesi.org.nz/hc/en-gb/articles/6367209795471) -for more information about the Milan nodes' differences from -Mahuika's Broadwell nodes. 
+ page](https://support.nesi.org.nz/hc/en-gb/articles/6367209795471) + for more information about the Milan nodes' differences from + Mahuika's Broadwell nodes. ## Mahuika Infiniband Islands diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Maui_Slurm_Partitions.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Maui_Slurm_Partitions.md index c9cf6776c..129ca067e 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Maui_Slurm_Partitions.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Maui_Slurm_Partitions.md @@ -22,8 +22,8 @@ zendesk_section_id: 360000030876 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Important -Partitions on these systems that may be used for NeSI workloads carry -the prefix **nesi\_**. + Partitions on these systems that may be used for NeSI workloads carry + the prefix **nesi\_**. @@ -138,7 +138,7 @@ nodes. ## Māui\_Ancil (CS500) Slurm Partitions - + 

milan | 7 days | 56 | -8 | 256 | 256

@@ -254,3 +254,4 @@ See [GPU use on NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001471955) for more details about Slurm and CUDA settings. +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md index 335e591d7..fd35a8d52 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Milan_Compute_Nodes.md @@ -19,14 +19,14 @@ zendesk_section_id: 360000030876 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) - +  ## How to access To use Mahuika's Milan nodes, you will need to explicitly specify the `milan` partition in your `sbatch` command line. Jobs are submitted from the same Mahuika login node that you currently use, and share the same -file system as other cluster nodes. +file system as other cluster nodes.  ``` sl sbatch -p milan ... @@ -44,7 +44,7 @@ the job description file: Each node has two AMD Milan CPUs, each with 8 "chiplets" of 8 cores and one level 3 cache, so each node has a total of **128 cores** or 256 hyperthreaded CPUs. This represents a significant increase of the number -CPUs per node compared to the Broadwell nodes (36 cores). +CPUs per node compared to the Broadwell nodes (36 cores).  The memory available to Slurm jobs is 512GB per node, so approximately 2GB per CPU. There are 64 nodes available, 8 of which will have double @@ -61,8 +61,8 @@ move from 7 to 8 is more significant than the move from Centos to Rocky. Many system libraries have changed version numbers between versions 7 and 8, so **some software compiled on Centos 7 will not run as-is on Rocky 8**. This can result in the runtime error -`error while loading shared libraries:... cannot open shared object file`, -which can be fixed by providing a copy of the old system library. 
+`error while loading shared libraries:... cannot open shared object file`,  +which can be fixed by providing a copy of the old system library.   We have repaired several of our existing environment modules that way. For programs which you have compiled yourself, we have installed a new @@ -101,7 +101,7 @@ In many ways, Intel's MKL is the best implementation of the BLAS and LAPACK libraries to which we have access, which is why we use it in our "*intel*" and "*gimkl*" toolchains.  Unfortunately, recent versions of MKL deliberately choose not to use the accelerated AVX instructions when -not running on an Intel CPU. +not running on an Intel CPU.   In order to persuade MKL to use the same fast optimised kernels on the new AMD Milan CPUs, you can do: @@ -116,7 +116,7 @@ We have set that as the default for our most recent toolchain Two alternative implementations have also been installed: OpenBLAS and BLIS. If you try them then please let us know if they work better than MKL for your application. BLIS is expected to perform well as a BLAS -alternative but not match MKL's LAPACK performance. +alternative but not match MKL's LAPACK performance.   ### Do I need to recompile my code? diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md index fc40a26a3..2c2ec2263 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/NetCDF-HDF5_file_locking.md @@ -52,7 +52,7 @@ ncdump: /path/to/file.nc: NetCDF: HDF error or ``` sl -Error in EM_FOPEN: NetCDF: HDF error - /path/to/file.nc +Error in EM_FOPEN: NetCDF: HDF error - /path/to/file.nc ``` or @@ -86,6 +86,6 @@ application. 
For more information see: - [Design -File Locking under SWMR in -HDF5](https://support.hdfgroup.org/HDF5/docNewFeatures/SWMR/Design-HDF5-FileLocking.pdf) + HDF5](https://support.hdfgroup.org/HDF5/docNewFeatures/SWMR/Design-HDF5-FileLocking.pdf) - [release notes, where mechanism for disabling file locking was -introduced](https://support.hdfgroup.org/ftp/HDF5/releases/ReleaseFiles/hdf5-1.10.1-RELEASE.txt) \ No newline at end of file + introduced](https://support.hdfgroup.org/ftp/HDF5/releases/ReleaseFiles/hdf5-1.10.1-RELEASE.txt) \ No newline at end of file diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md index 42f4ced5e..547239ee1 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/SLURM-Best_Practice.md @@ -27,7 +27,7 @@ stop immediately rather than attempting to continue with an unexpected environment or erroneous intermediate data.  It also ensures that your failed jobs show a status of FAILED in *sacct* output. -### Resources +### Resources  Don't request more resources (CPUs, memory, GPUs) than you will need. In addition to using your core hours faster, resource-intensive jobs will diff --git a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md index f14d09167..dccf1f16c 100644 --- a/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md +++ b/docs/Scientific_Computing/Running_Jobs_on_Maui_and_Mahuika/Slurm_Interactive_Sessions.md @@ -25,12 +25,12 @@ you to use them interactively as you would the login node.
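The release notes linked above introduced an environment switch for disabling HDF5 file locking. A minimal sketch of using it in practice, assuming HDF5 1.10.1 or later (the `HDF5_USE_FILE_LOCKING` variable is documented in those release notes; everything else here is plain shell):

``` sl
# Disable HDF5 file locking for this shell and all child processes;
# HDF5-based tools (ncdump, h5dump, ...) read this variable at startup.
export HDF5_USE_FILE_LOCKING=FALSE

# Confirm what child processes will inherit.
echo "HDF5_USE_FILE_LOCKING=${HDF5_USE_FILE_LOCKING}"
```

In a Slurm script, the export would go before the line that launches the application, so the whole job inherits the setting.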
There are two main commands that can be used to make a session, `srun` and `salloc`, both of which use most of the same options available to `sbatch` (see [our Slurm Reference -Sheet](https://support.nesi.org.nz/hc/en-gb/articles/360000691716)). +Sheet](https://support.nesi.org.nz/hc/en-gb/articles/360000691716)).  !!! prerequisite Warning -An interactive session will, once it starts, use the entire requested -block of CPU time and other resources unless earlier exited from, even -if unused. To avoid unnecessary charges to your project, don't forget -to exit an interactive session once finished. + An interactive session will, once it starts, use the entire requested + block of CPU time and other resources unless earlier exited from, even + if unused. To avoid unnecessary charges to your project, don't forget + to exit an interactive session once finished. ## Using 'srun --pty bash' @@ -44,7 +44,7 @@ For example; srun --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 8 --mem-per-cpu 1500 --time 24:00:00 --pty bash ``` -You will receive a message. + You will receive a message. ``` sl srun: job 10256812 queued and waiting for resources @@ -76,7 +76,7 @@ For example: salloc --account nesi12345 --job-name "InteractiveJob" --cpus-per-task 8 --mem-per-cpu 1500 --time 24:00:00 ``` -You will receive a message. + You will receive a message. ``` sl salloc: Pending job allocation 10256925 @@ -87,7 +87,7 @@ And when the job starts; ``` sl salloc: job 10256925 has been allocated resources -salloc: Granted job allocation 10256925 +salloc: Granted job allocation 10256925 [mahuika01~ SUCCESS ]$ ``` @@ -105,24 +105,24 @@ not available. You can request a start time using the `--begin` flag. The `--begin` flag takes either absolute or relative times as values. !!! prerequisite Warning -If you specify absolute dates and/or times, Slurm will interpret those -according to your environment's current time zone. 
Ensure that you -know what time zone your environment is using, for example by running -`date` in the same terminal session. + If you specify absolute dates and/or times, Slurm will interpret those + according to your environment's current time zone. Ensure that you + know what time zone your environment is using, for example by running + `date` in the same terminal session. - `--begin=16:00` means start the job no earlier than 4 p.m. today. -(Seconds are optional, but the time must be given in 24-hour -format.) + (Seconds are optional, but the time must be given in 24-hour + format.) - `--begin=11/05/20` means start the job on (or after) 5 -November 2020. Note that Slurm uses American date formats. -`--begin=2020-11-05` is another Slurm-acceptable way of saying the -same thing, and possibly easier for a New Zealander. + November 2020. Note that Slurm uses American date formats. + `--begin=2020-11-05` is another Slurm-acceptable way of saying the + same thing, and possibly easier for a New Zealander. - `--begin=2020-11-05T16:00:00` means start the job on (or after) 4 -p.m. on 5 November 2020. + p.m. on 5 November 2020. - `--begin=now+1hour` means wait at least one hour before starting the -job. + job. - `--begin=now+60` means wait at least one minute before starting the -job. + job. If no `--begin` argument is given, the default behaviour is to start as soon as possible. @@ -146,15 +146,15 @@ you leave your workstation unattended for a while, in case your computer turns off or goes to sleep or its connection to the internet is disrupted while you're away. - +  ## Setting up a detachable terminal !!! prerequisite Warning -If you don't request your interactive session from within a detachable -terminal, any interruption to the controlling terminal, for example by -your computer going to sleep or losing its connection to the internet, -will permanently cancel that interactive session and remove it from -the queue, whether it has started or not. 
+ If you don't request your interactive session from within a detachable + terminal, any interruption to the controlling terminal, for example by + your computer going to sleep or losing its connection to the internet, + will permanently cancel that interactive session and remove it from + the queue, whether it has started or not. 1. Log in to a Mahuika, Māui or Māui-ancil login node. 2. Start up `tmux` or `screen`. @@ -180,9 +180,9 @@ time. Slurm offers an easy solution: Identify the job, and use `scontrol` to postpone its start time. !!! prerequisite Note -Job IDs are unique to each cluster but not across the whole of NeSI. -Therefore, `scontrol` must be run on a node belonging to the cluster -where the job is queued. + Job IDs are unique to each cluster but not across the whole of NeSI. + Therefore, `scontrol` must be run on a node belonging to the cluster + where the job is queued. The following command will delay the start of the job with numeric ID 12345678 until (at the earliest) 9:30 a.m. the next day: @@ -198,9 +198,9 @@ until (at the earliest) 9:30 a.m. on Monday: scontrol update jobid=12345678 StartTime=now+3daysT09:30:00 ``` !!! prerequisite Warning -Don't just set `StartTime=tomorrow` with no time specification unless -you like the idea of your interactive session starting at midnight or -in the wee small hours of the morning. + Don't just set `StartTime=tomorrow` with no time specification unless + you like the idea of your interactive session starting at midnight or + in the wee small hours of the morning. ### Bringing forward the start of an interactive job @@ -262,20 +262,20 @@ nodes. This is left as an exercise for the reader, having regard to the following: - **Time zone:** Even if your environment is set up to use a different -time zone (commonly New Zealand time, which adjusts for daylight -saving as needed), time schedules in the crontab itself are -interpreted in UTC. So if you want something to run at 4:30 p.m. 
New -Zealand time regardless of the time of year, the cron job will need -to run at 4:30 a.m. UTC (during winter) or 3:30 a.m. UTC (during -summer), and you will need to edit the crontab every six months or -so. + time zone (commonly New Zealand time, which adjusts for daylight + saving as needed), time schedules in the crontab itself are + interpreted in UTC. So if you want something to run at 4:30 p.m. New + Zealand time regardless of the time of year, the cron job will need + to run at 4:30 a.m. UTC (during winter) or 3:30 a.m. UTC (during + summer), and you will need to edit the crontab every six months or + so. - **Weekends:** If you just have a single cron job that postpones -pending interactive jobs until the next day, interactive jobs -pending on a Friday afternoon will be postponed until Saturday -morning, which is probably not what you want. Either your cron job -detects the fact of a Friday and postpones jobs until Monday, or you -have two cron jobs, one that runs on Mondays to Thursdays, and a -different cron job running on Fridays. + pending interactive jobs until the next day, interactive jobs + pending on a Friday afternoon will be postponed until Saturday + morning, which is probably not what you want. Either your cron job + detects the fact of a Friday and postpones jobs until Monday, or you + have two cron jobs, one that runs on Mondays to Thursdays, and a + different cron job running on Fridays. ## Cancelling an interactive session diff --git a/docs/Scientific_Computing/Supported_Applications/ABAQUS.md b/docs/Scientific_Computing/Supported_Applications/ABAQUS.md index 3dc503f3f..2dd6407c8 100644 --- a/docs/Scientific_Computing/Supported_Applications/ABAQUS.md +++ b/docs/Scientific_Computing/Supported_Applications/ABAQUS.md @@ -39,9 +39,9 @@ hyperthreaded CPUs will use twice the number of licence tokens. It may be worth adding  `#SBATCH --hint nomultithread` to your slurm script if licence tokens are your main limiting factor. !!! 
prerequisite Tips -Required ABAQUS licences can be determined by this simple and -intuitive formula `⌊ 5 × N^0.422 ⌋` where `N` is the number -of CPUs. + Required ABAQUS licences can be determined by this simple and + intuitive formula `⌊ 5 × N^0.422 ⌋` where `N` is the number + of CPUs. You can force ABAQUS to use a specific licence type by setting the parameter `academic=TEACHING` or `academic=RESEARCH` in a relevant @@ -57,15 +57,15 @@ Not all solvers are compatible with all types of parallelisation. | `mp_mode=threads` | ✖ | ✔ | ✔ | ✔ | | `mp_mode=mpi` | ✔ | ✔ | ✖ | ✖ | !!! prerequisite Note -If your input files were created using an older version of ABAQUS you -will need to update them using the command, -``` sl -abaqus -upgrade -job new_job_name -odb old.odb -``` -or -``` sl -abaqus -upgrade -job new_job_name -inp old.inp -``` + If your input files were created using an older version of ABAQUS you + will need to update them using the command: + ``` sl + abaqus -upgrade -job new_job_name -odb old.odb + ``` + or + ``` sl + abaqus -upgrade -job new_job_name -inp old.inp + ```
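As a quick sanity check of the token formula, ⌊5 × N^0.422⌋, here is a shell sketch tabulating licence tokens against CPU count. The numbers come from the formula alone, not from querying a licence server:

``` sl
# Tokens required for a given CPU count, per floor(5 * N^0.422).
for n in 1 4 8 16 36; do
    awk -v n="$n" 'BEGIN { printf "%2d CPUs -> %d tokens\n", n, int(5 * n^0.422) }'
done
```

For example, 16 CPUs works out to 16 tokens and 36 CPUs to 22, which illustrates why hyperthreading (doubling the CPU count seen by the licence server) can be costly.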
@@ -192,7 +192,7 @@ class="sourceCode bash"> "abaqus_v6.env" rm "abaqus_v6.env" ``` !!! prerequisite Useful Links -- [Command line options for standard -submission.](https://www.sharcnet.ca/Software/Abaqus610/Documentation/docs/v6.10/books/usb/default.htm?startat=pt01ch03s02abx02.html) - + - [Command line options for standard + submission.](https://www.sharcnet.ca/Software/Abaqus610/Documentation/docs/v6.10/books/usb/default.htm?startat=pt01ch03s02abx02.html) +  ![ABAQUS\_speedup\_SharedVMPI.png](../../assets/images/ABAQUS.png) - +  *Note: Hyperthreading off, testing done on a small mechanical FEA model. Results are highly model dependent. Do your own tests.* \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/ANSYS.md b/docs/Scientific_Computing/Supported_Applications/ANSYS.md index 044db132f..b0b9f0725 100644 --- a/docs/Scientific_Computing/Supported_Applications/ANSYS.md +++ b/docs/Scientific_Computing/Supported_Applications/ANSYS.md @@ -31,18 +31,18 @@ The three main ANSYS licenses are: - **ANSYS Teaching License **(aa\_t) -This is the default license type, it can be used on up to 6 CPUs on -models with less than 512k nodes + This is the default license type; it can be used on up to 6 CPUs on + models with fewer than 512k nodes. - **ANSYS Research license** (aa\_r) -No node restrictions. Can be used on up to 16 CPUs, for every -additional CPU over 16 you must request additional 'aa\_r\_hpc' -licenses. + No node restrictions. Can be used on up to 16 CPUs; for every + additional CPU over 16 you must request an additional 'aa\_r\_hpc' + license. -- **ANSYS HPC License** (aa\_r\_hpc)** -**One of these is required for each CPU over 16 when using -a research license. +- **ANSYS HPC License** (aa\_r\_hpc)** + **One of these is required for each CPU over 16 when using + a research license. ## License Order @@ -63,9 +63,9 @@ prefer_research_license prefer_teaching_license ``` !!!
prerequisite Note -License preferences are individually tracked by *each version of -ANSYS.* Make sure you set preferences using the same version as in -your script. + License preferences are individually tracked by *each version of + ANSYS.* Make sure you set preferences using the same version as in + your script. ## Journal files @@ -84,7 +84,7 @@ Below is an example of this from a fluent script. #SBATCH --time 01:00:00 # Wall time #SBATCH --mem 512MB # Memory per node #SBATCH --licenses aa_r:1 # One license token per CPU, less 16 -#SBATCH --array 1-100 +#SBATCH --array 1-100 #SBATCH --hint nomultithread # No hyperthreading module load ANSYS/19.2 @@ -119,22 +119,22 @@ jobid=1234567), the file  `fluent_1234567.in` will be created: ; Solve 10 time steps /file/write-case-data testCase1 ok -; Since our output name is the same as our input, we have to provide conformation to overwrite, 'ok' +; Since our output name is the same as our input, we have to provide confirmation to overwrite, 'ok' exit yes ; Not including 'exit yes' will cause fluent to exit with an error. (Everything will be fine, but SLURM will read it as FAILED.) ``` -then called as an input `fluent -v3ddp -g -i fluent_1234567.in`, +then called as an input `fluent -v3ddp -g -i fluent_1234567.in`, then deleted with `rm fluent_1234567.in` This can be used with variable substitution to great effect, as it allows the use of variables in what might otherwise be a fixed input. !!! prerequisite Note -Comments can be added to journal files using a `;`. For example: -``` sl -; This is a comment -``` + Comments can be added to journal files using a `;`. For example: + ``` sl + ; This is a comment + ``` ## Fluent @@ -143,7 +143,7 @@ files](https://docs.hpc.shef.ac.uk/en/latest/referenceinfo/ANSYS/fluent/writing- `fluent -help` for a list of commands. -Must have one of these flags. +You must specify one of these flags.  | | | |--------|------------------------------------|
| `2ddp` | 2D solver, double point precision. | | `3ddp` | 3D solver, double point precision. | - + 
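Tying the solver flags back to the licensing rules above: with a research licence (aa\_r), each CPU beyond 16 needs one extra aa\_r\_hpc token. A small shell sketch of that arithmetic (the function name is ours for illustration, not an ANSYS or NeSI helper):

``` sl
# Extra aa_r_hpc tokens needed when running on "cpus" CPUs under aa_r.
hpc_tokens_needed() {
    local cpus=$1
    if [ "$cpus" -gt 16 ]; then
        echo $(( cpus - 16 ))   # one token per CPU beyond the 16-CPU base
    else
        echo 0                  # within the base research licence
    fi
}

hpc_tokens_needed 8    # prints 0
hpc_tokens_needed 24   # prints 8
```

The result would pair with a `--licenses` request along the lines shown in the journal-file example above.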
@@ -215,8 +215,8 @@ class="sourceCode bash">" yes "" ``` Note, the command must end with two `""` to indicate there are no more -files to add. +files to add.  -As an example +As an example  ``` sl define/user-defined/compiled-functions compile "libudf" yes "myUDF.c" "" "" @@ -443,8 +443,8 @@ class="sourceCode bash"> struct.out || scancel $SLURM_JOBID & + -scport $port -schost $node -scname "$mechsolname" \ + -i "structural.dat" > struct.out || scancel $SLURM_JOBID & cd .. sleep 2 @@ -677,8 +677,8 @@ cd FluidFlow # Run Fluent in the background, alongside the system coupler and ANSYS. fluent 3ddp -g -t$FLUID_CPUS \ --scport=$port -schost=$node -scname="$fluentsolname" \ --i "fluidFlow.jou" > fluent.out || scancel $SLURM_JOBID & + -scport=$port -schost=$node -scname="$fluentsolname" \ + -i "fluidFlow.jou" > fluent.out || scancel $SLURM_JOBID & cd .. # Before exiting, wait for all background tasks (the system coupler, ANSYS and @@ -701,12 +701,12 @@ The following FENSAP solvers are compatible with MPI - C3D - OptiGrid -### Case setup +### Case setup  ### With GUI If you have set up X-11 forwarding, you may launch the FENSAP ice using -the command `fensapiceGUI` from within your FENSAP project directory. +the command `fensapiceGUI` from within your FENSAP project directory. 
@@ -743,23 +743,23 @@ You may close your session and the job will continue to run on the compute nodes. You will be able to view the running job at any time by opening the GUI within the project folder. !!! prerequisite Note -Submitting your job through the use of the GUI has disadvantages and -may not be suitable in all cases. -- Closing the session or losing connection will prevent the next -stage of the job starting (currently executing step will continue -to run).  It is a good idea to launch the GUI inside a tmux/screen -session then send the process to background to avoid this. -- Each individual step will be launched with the same parameters -given in the GUI. -- By default 'restart' is set to disabled. If you wish to continue a -job from a given step/shot you must select so in the dropdown -menu. + Submitting your job through the GUI has disadvantages and + may not be suitable in all cases. + - Closing the session or losing the connection will prevent the next + stage of the job starting (the currently executing step will continue + to run).  It is a good idea to launch the GUI inside a tmux/screen + session, then send the process to the background to avoid this. + - Each individual step will be launched with the same parameters + given in the GUI. + - By default, 'restart' is set to disabled. If you wish to continue a + job from a given step/shot, you must select this in the dropdown + menu. ### Using fensap2slurm Set up your model as you would normally, except rather than starting the run just click 'save'. You *do not* need to set the number of CPUs or MPI -configuration. +configuration. Then in your terminal type `fensap2slurm path/to/project` or run `fensap2slurm` from inside the run directory. @@ -771,7 +771,7 @@ last stage of the shot, that way you can set more accurate resource requirements for the remainder. The workflow can then be started by running `.solvercmd`, e.g. `bash .solvercmd`. -Progress can be tracked through the GUI as usual.
+Progress can be tracked through the GUI as usual.  ## ANSYS-Electromagnetic @@ -809,12 +809,12 @@ All batch options can be listed using ansysedt -batchoptionhelp ``` -(Note, this requires a working X-server) +(Note, this requires a working X-server)  !!! prerequisite Note -Each batch option must have it's own flag, e.g. -``` sl --batchoptions "HFSS/HPCLicenseType=Pool" -batchoptions "Desktop/ProjectDirectory=$PWD" -batchoptions "HFSS/MPIVendor=Intel" -``` + Each batch option must have its own flag, e.g. + ``` sl + -batchoptions "HFSS/HPCLicenseType=Pool" -batchoptions "Desktop/ProjectDirectory=$PWD" -batchoptions "HFSS/MPIVendor=Intel" + ``` ### Interactive @@ -862,7 +862,7 @@ tasks it launches run on a compute node. This requires using *salloc* instead of *sbatch*, for example: ``` bash -salloc -A nesi99999 -t 30 -n 16 -C avx --mem-per-cpu=512MB bash -c 'module load ANSYS; fluent -v3ddp -t$SLURM_NTASKS' +salloc -A nesi99999 -t 30 -n 16 -C avx --mem-per-cpu=512MB bash -c 'module load ANSYS; fluent -v3ddp -t$SLURM_NTASKS' ``` As with any job, you may have to wait a while before the resource is diff --git a/docs/Scientific_Computing/Supported_Applications/AlphaFold.md b/docs/Scientific_Computing/Supported_Applications/AlphaFold.md index 46b4dfaf1..89ac16989 100644 --- a/docs/Scientific_Computing/Supported_Applications/AlphaFold.md +++ b/docs/Scientific_Computing/Supported_Applications/AlphaFold.md @@ -20,10 +20,10 @@ zendesk_section_id: 360000040076 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!!
prerequisite Tips -An extended version of AlphaFold2 on NeSI Mahuika cluster which -contains additional information such as visualisation of AlphaFold -outputs, etc [can be found -here](https://nesi.github.io/alphafold2-on-mahuika/) + An extended guide to AlphaFold2 on the NeSI Mahuika cluster, which + contains additional information such as visualisation of AlphaFold + outputs, [can be found + here](https://nesi.github.io/alphafold2-on-mahuika/) ## Description @@ -40,7 +40,7 @@ the [Supplementary Information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41586-021-03819-2/MediaObjects/41586_2021_3819_MOESM1_ESM.pdf) for a detailed description of the method. -Home page is at +Home page is at   ## License and Disclaimer @@ -84,20 +84,20 @@ AlphaFold2DB: AlphaFold2DB/2022-06 Description: AlphaFold2 databases -Versions: -AlphaFold2DB/2022-06 -AlphaFold2DB/2023-04 + Versions: + AlphaFold2DB/2022-06 + AlphaFold2DB/2023-04 ``` Loading a module will set the `$AF2DB` variable, which points to -the  selected version of the database. For an example. +the selected version of the database. For example:  ``` sl $ module load AlphaFold2DB/2023-04 -$ echo $AF2DB +$ echo $AF2DB /opt/nesi/db/alphafold_db/2023-04 ``` @@ -150,9 +150,9 @@ run_alphafold.py --use_gpu_relax \ Input *fasta* used in following example ``` sl -T1083 + T1083 GAMGSEIEHIEEAIANAKTKADHERLVAHYEEEAKRLEKKSEEYQELAKVYKKITDVYPNIRSYMVLHYQNLTRRYKEAAEENRALAKLHHELAIVED -T1084 + T1084 MAAHKGAEHHHKAAEHHEQAAKHHHAAAEHHEKGEHEQAAHHADTAYAHHKHAEEHAAQAAKHDAEHHAPKPH ``` @@ -195,7 +195,7 @@ run_alphafold.py \ ## AlphaFold Singularity container (prior to v2.3.2) If you would like to use a version prior to 2.3.2, it can be done via -the Singularity containers. +the Singularity containers.  We prepared a Singularity container image based on the [official Dockerfile](https://hub.docker.com/r/catgumag/alphafold) with some
Image (.*simg*) and the corresponding definition file #SBATCH --job-name alphafold2_monomer_example #SBATCH --mem 30G #SBATCH --cpus-per-task 6 -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node P100:1 #SBATCH --time 02:00:00 #SBATCH --output slurmout.%j.out @@ -243,7 +243,7 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python --fasta_paths=$INPUT/rcsb_pdb_3GKI.fasta ``` - +  #### Multimer @@ -254,7 +254,7 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python #SBATCH --job-name alphafold2_monomer_example #SBATCH --mem 30G #SBATCH --cpus-per-task 6 -#SBATCH --gpus-per-node P100:1 +#SBATCH --gpus-per-node P100:1 #SBATCH --time 02:00:00 #SBATCH --output slurmout.%j.out @@ -291,13 +291,13 @@ singularity exec --nv /opt/nesi/containers/AlphaFold/alphafold_2.2.0.simg python #### Explanation of Slurm variables and Singularity flags 1. Values for `--mem` , `--cpus-per-task` and `--time` Slurm variables -are for *3RGK.fasta*. Adjust them accordingly + are for *3RGK.fasta*. Adjust them accordingly 2. We have tested this on both P100 and A100 GPUs where the runtimes -were identical. Therefore, the above example was set to former -via `P100:1` + were identical. Therefore, the above example was set to former + via `P100:1` 3. The `--nv` flag enables GPU support. 4. 
`--pwd /app/alphafold` is to workaround this [existing -issue](https://github.com/deepmind/alphafold/issues/32) + issue](https://github.com/deepmind/alphafold/issues/32) @@ -311,19 +311,19 @@ Input *fasta* used in following example and subsequent benchmarking is ## Troubleshooting - If you are to encounter the message "*RuntimeError: Resource -exhausted: Out of memory*" , add the following variables to the -slurm script + exhausted: Out of memory*" , add the following variables to the + slurm script -For module based runs +For module based runs  ``` sl export TF_FORCE_UNIFIED_MEMORY=1 export XLA_PYTHON_CLIENT_MEM_FRACTION=4.0 ``` -For Singularity based runs +For Singularity based runs  ``` sl -export SINGULARITYENV_TF_FORCE_UNIFIED_MEMORY=1 +export SINGULARITYENV_TF_FORCE_UNIFIED_MEMORY=1 export SINGULARITYENV_XLA_PYTHON_CLIENT_MEM_FRACTION=4.0 ``` \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/BLAST.md b/docs/Scientific_Computing/Supported_Applications/BLAST.md index 47af79246..d2aca6741 100644 --- a/docs/Scientific_Computing/Supported_Applications/BLAST.md +++ b/docs/Scientific_Computing/Supported_Applications/BLAST.md @@ -24,7 +24,7 @@ zendesk_section_id: 360000040076 - +  ## BLAST Databases @@ -61,7 +61,7 @@ approach first and see if it takes too long. For jobs which need less than 24 CPU-hours, eg: those that use small databases (< 10 GB) or small amounts of query sequence (< 1 GB), or fast BLAST programs such as *blastn* with its default (megablast) -settings. +settings.   
``` bash #!/bin/bash -e @@ -84,7 +84,7 @@ DB=nt #DB=nr $BLASTAPP $BLASTOPTS -db $DB -query $QUERIES -outfmt "$FORMAT" \ --out $QUERIES.$DB.$BLASTAPP -num_threads $SLURM_CPUS_PER_TASK + -out $QUERIES.$DB.$BLASTAPP -num_threads $SLURM_CPUS_PER_TASK ``` ### Multiple threads and local database copy @@ -121,10 +121,11 @@ DB=nt #DB=nr # Keep the database in RAM -cp $BLASTDB/{$DB,taxdb}.* $TMPDIR/ +cp $BLASTDB/{$DB,taxdb}.* $TMPDIR/ export BLASTDB=$TMPDIR $BLASTAPP $BLASTOPTS -db $DB -query $QUERIES -outfmt "$FORMAT" \ --out $QUERIES.$DB.$BLASTAPP -num_threads $SLURM_CPUS_PER_TASK + -out $QUERIES.$DB.$BLASTAPP -num_threads $SLURM_CPUS_PER_TASK ``` +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/BRAKER.md b/docs/Scientific_Computing/Supported_Applications/BRAKER.md index c84e4ddf4..6162b1d8d 100644 --- a/docs/Scientific_Computing/Supported_Applications/BRAKER.md +++ b/docs/Scientific_Computing/Supported_Applications/BRAKER.md @@ -54,7 +54,7 @@ result of the pipeline is the combined gene set of both gene prediction tools, which only contains genes with very high support from extrinsic evidence. - +  Home page : @@ -67,24 +67,24 @@ Artistic License ## Prerequisites -!!! prerequisite Obtain GeneMark-ES/ET Academic License -GeneMark-ES/ET which is one of the dependencies for BRAKER requires an -individual academic license  (this is free). This can be obtained as -below -- Download URL - - - -- ![genemark\_es\_license.png](../../assets/images/BRAKER.png) -- Downloaded filename will be in the format of **gm\_key\_64.gz. ** -- Decompress this file with `gunzip gm_key_64.gz`  and move it to -home directory as  a **hidden** file under the filename `.gm_key` -.i.e. `~/.gm_key` - +!!! prerequisite Obtain GeneMark-ES/ET Academic License  + GeneMark-ES/ET which is one of the dependencies for BRAKER requires an + individual academic license  (this is free). 
This can be obtained as + below + - Download URL + +   +   + - ![genemark\_es\_license.png](../../assets/images/BRAKER.png) + - Downloaded filename will be in the format of **gm\_key\_64.gz. ** + - Decompress this file with `gunzip gm_key_64.gz`  and move it to + home directory as  a **hidden** file under the filename `.gm_key` +  .i.e. `~/.gm_key` + !!! prerequisite Copy AUGUSTUS config to a path with read/write permissions -Make a copy of AUGUSTUS config from -***/opt/nesi/CS400\_centos7\_bdw/AUGUSTUS/3.4.0-gimkl-2022a/config*** -to path with read/write permissions .i.e. project, nobackup,home + Make a copy of AUGUSTUS config from + ***/opt/nesi/CS400\_centos7\_bdw/AUGUSTUS/3.4.0-gimkl-2022a/config*** +  to path with read/write permissions .i.e. project, nobackup,home  ### Example Slurm scripts @@ -113,7 +113,7 @@ srun braker.pl --threads=${SLURM_CPUS_PER_TASK} --genome=genome.fa --prot_seq=pr ``` This will generate the output directory named **braker** in the current -working directory with content similar to below +working directory with content similar to below  ``` sl augustus.hints.aa braker.gtf genemark_evidence.gff prothint.gff @@ -121,6 +121,6 @@ augustus.hints.codingseq braker.log genemark_hintsfile.gff seed_proteins augustus.hints.gtf cmd.log genome_header.map species/ augustus.hints_iter1.aa errors/ hintsfile.gff uniqueSeeds.gtf augustus.hints_iter1.codingseq evidence.gff hintsfile_iter1.gff what-to-cite.txt -augustus.hints_iter1.gff GeneMark-EP/ prevHints.gff -augustus.hints_iter1.gtf GeneMark-ES/ proteins.fa +augustus.hints_iter1.gff GeneMark-EP/ prevHints.gff +augustus.hints_iter1.gtf GeneMark-ES/ proteins.fa ``` \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/CESM.md b/docs/Scientific_Computing/Supported_Applications/CESM.md index bcc3fa96c..90117aa3f 100644 --- a/docs/Scientific_Computing/Supported_Applications/CESM.md +++ b/docs/Scientific_Computing/Supported_Applications/CESM.md @@ -168,3 +168,4 @@ It 
involves performing a number of short model runs to determine which components are most expensive and how the individual components scale. That information can then be used to determine an optimal load balance. +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/COMSOL.md b/docs/Scientific_Computing/Supported_Applications/COMSOL.md index 8b598b4eb..0bc2b4bdb 100644 --- a/docs/Scientific_Computing/Supported_Applications/COMSOL.md +++ b/docs/Scientific_Computing/Supported_Applications/COMSOL.md @@ -27,18 +27,18 @@ comsol --help Will display a list of COMSOL batch commands. !!! prerequisite Useful Links -- [Running COMSOL in parallel on -clusters.](https://www.comsol.com/support/knowledgebase/1001/) -- [Running parametric sweeps, batch sweeps, and cluster sweeps from -the command -line.](https://www.comsol.com/support/knowledgebase/1250/) -- [COMSOL and -Multithreading.](https://www.comsol.com/support/knowledgebase/1096/) + - [Running COMSOL in parallel on + clusters.](https://www.comsol.com/support/knowledgebase/1001/) + - [Running parametric sweeps, batch sweeps, and cluster sweeps from + the command + line.](https://www.comsol.com/support/knowledgebase/1250/) + - [COMSOL and + Multithreading.](https://www.comsol.com/support/knowledgebase/1096/) ## Batch Submission When using COMSOL batch the following flags can be used to control -distribution. +distribution.  | | | |-------------------------|----------------------------------------------------------------------------------------------------------------------------------| @@ -136,8 +136,8 @@ class="sourceCode bash">Resource requirements

-COMSOL does not support MPI therefore #SBATCH --ntasks should never -be greater than 1. + COMSOL does not support MPI; therefore #SBATCH --ntasks should never + be greater than 1.

-Memory requirements depend on job type, but will scale up with number of CPUs -≈ linearly. + Memory requirements depend on job type, but scale approximately + linearly with the number of CPUs.

-Hyper-threading can benefit jobs using less than -8 CPUs, but is not recommended on larger -jobs. + Hyper-threading can benefit jobs using fewer than + 8 CPUs, but is not recommended for larger + jobs.

-Performance is highly depended on the model used. The above should only be used as a very rough guide. + Performance is highly dependent on the model used. The above should only be used as a very rough guide.

-speedup_smoothed.png + speedup_smoothed.png

--> \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Clair3 .md b/docs/Scientific_Computing/Supported_Applications/Clair3 .md index 8813c2b7d..a78ffe438 100644 --- a/docs/Scientific_Computing/Supported_Applications/Clair3 .md +++ b/docs/Scientific_Computing/Supported_Applications/Clair3 .md @@ -38,7 +38,7 @@ Clair3 is the 3rd generation of A short pre-print describing Clair3's algorithms and results is at [bioRxiv](https://www.biorxiv.org/content/10.1101/2021.12.29.474431v1). - +  ## License and Disclaimer @@ -50,26 +50,26 @@ modification, are permitted provided that the following conditions are met: 1. Re-distributions of source code must retain the above copyright -notice, this list of conditions and the following disclaimer. + notice, this list of conditions and the following disclaimer. 2. Re-distributions in binary form must reproduce the above copyright -notice, this list of conditions and the following disclaimer in the -documentation and/or other materials provided with the distribution. + notice, this list of conditions and the following disclaimer in the + documentation and/or other materials provided with the distribution. 3. Neither the name of the copyright holder nor the names of its -contributors may be used to endorse or promote products derived from -this software without specific prior written permission. + contributors may be used to endorse or promote products derived from + this software without specific prior written permission. 
``` sl - +  ``` ### Example Slurm script **Caution**: Absolute path is needed for both `INPUT_DIR` and -`OUTPUT_DIR` - - +`OUTPUT_DIR` + + ``` sl #!/bin/bash -e @@ -102,6 +102,7 @@ run_clair3.sh \ --output=${OUTPUT_DIR} --enable_phasing ``` - - - + + + + \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Cylc.md b/docs/Scientific_Computing/Supported_Applications/Cylc.md index 6e9c9959a..41a45cbb9 100644 --- a/docs/Scientific_Computing/Supported_Applications/Cylc.md +++ b/docs/Scientific_Computing/Supported_Applications/Cylc.md @@ -45,9 +45,9 @@ documentation](https://cylc.github.io/documentation/) for more elaborate examples, including some with a cycling (repeated) graph pattern. One of the strengths of Cylc is that simple workflows can be executed simply while allowing for very complex workflows, with thousands of tasks, -which may be repeated ad infinitum. - +which may be repeated ad infinitum.  +  ## SSH configuration @@ -58,16 +58,16 @@ cluster without prompting for a passphrase (all HPC nodes see the same filesystem, so this is easy): - run **`ssh-keygen`** to generate a public/private key pair with **no -passphrase** (when it asks for a passphrase, just hit enter) + passphrase** (when it asks for a passphrase, just hit enter) - add your own public key to your authorized keys -file: **`cat .ssh/id_rsa.pub >> .ssh/authorized_keys`** + file: **`cat .ssh/id_rsa.pub >> .ssh/authorized_keys`**  - check that your **keys, authorized\_keys file, ssh -directory, **and** home directory** all have sufficiently secure -file permissions. If not, `ssh` will silently revert to requiring -password entry. See for -example  + directory, **and** home directory** all have sufficiently secure + file permissions. If not, `ssh` will silently revert to requiring + password entry. 
See for + example  - make sure your **home directory** has a maximum -of **750** permissions + of **750** permissions Now you should be able to run **`ssh mahuika02`**(for example) without being asked for a passphrase. @@ -99,9 +99,9 @@ changed significantly at version 8. ``` sl $ cylc list-versions -7.9.1 +7.9.1 ... -8.0.1 +8.0.1 cylc -> 7.9.1 ``` @@ -130,23 +130,23 @@ Create/edit the following **flow.cylc** file containing ``` sl [scheduling] # Define the tasks and when they should run -[[graph]] -R1 = """ # R1 means run this graph once -taskA & taskB => taskC # Defines the task graph -""" + [[graph]] + R1 = """ # R1 means run this graph once + taskA & taskB => taskC # Defines the task graph + """ [runtime] # Define what each task should run -[[root]] # Default settings inherited by all tasks -platform = mahuika-slurm # Run "cylc conf" to see platforms. -[[[directives]]] # Default SLURM options for the tasks below ---account = nesi99999 # CHANGE -[[taskA]] -script = echo "running task A" -[[[directives]]] # specific SLURM option for this task ---ntasks = 2 -[[taskB]] -script = echo "running task B" -[[taskC]] -script = echo "running task C" + [[root]] # Default settings inherited by all tasks + platform = mahuika-slurm # Run "cylc conf" to see platforms. + [[[directives]]] # Default SLURM options for the tasks below + --account = nesi99999 # CHANGE + [[taskA]] + script = echo "running task A" + [[[directives]]] # specific SLURM option for this task + --ntasks = 2 + [[taskB]] + script = echo "running task B" + [[taskC]] + script = echo "running task C" ``` In the above example, we have three tasks (taskA, taskB and taskC), @@ -161,13 +161,13 @@ to see a list of platforms. The SLURM settings for taskA are in the ## How to interact with Cylc -Cylc takes command lines. Type +Cylc takes command lines. Type  ``` sl $ cylc help all ``` -to see the available commands. Type +to see the available commands. 
Type  ``` sl $ cylc help install # or cylc install --help @@ -203,7 +203,7 @@ Valid for cylc-8.0.1 ## Looking at the workflow graph -A useful command is +A useful command is  ``` sl $ cylc graph simple @@ -211,7 +211,7 @@ $ cylc graph simple which will generate a png file, generally in the /tmp directory with a name like /tmp/tmpzq3bjktw.PNG. Take note of the name of the png file. -To visualise the file you can type +To visualise the file you can type  ``` sl $ display  /tmp/tmpzq3bjktw.PNG # ADJUST the file name @@ -227,7 +227,7 @@ The "1" indicates that this workflow graph is executed only once. ## Different ways to interact with Cylc Every Cylc action can be executed via the command line. Alternatively, -you can invoke each action through a **terminal user interface** (tui), +you can invoke each action through a **terminal user interface** (tui),  ``` sl $ cylc tui simple @@ -329,7 +329,7 @@ $ cylc cat-log simple//1/taskA # note // between workflow and task ID of the first cycle of taskA. The "1" refers to the task iteration, or cycle point. Our simple workflow only has one iteration (as dictated by -the R1 graph above). +the R1 graph above).  ## How to clean or remove a workflow diff --git a/docs/Scientific_Computing/Supported_Applications/Delft3D.md b/docs/Scientific_Computing/Supported_Applications/Delft3D.md index d3fe3743f..44b101a3d 100644 --- a/docs/Scientific_Computing/Supported_Applications/Delft3D.md +++ b/docs/Scientific_Computing/Supported_Applications/Delft3D.md @@ -106,5 +106,5 @@ class="sourceCode bash">
@@ -89,7 +89,7 @@ the function of interest. Please also note that there are some inconsistencies between Picard and GATK flag naming conventions, so it is best to double check them. - +  ## Common Issues @@ -111,10 +111,10 @@ TMPDIR="/nesi/nobackup//GATK_tmp/" mkdir -p ${TMPDIR} # put this line in AFTER you load GATK but BEFORE running GATK -export _JAVA_OPTIONS=-Djava.io.tmpdir=${TMPDIR} +export _JAVA_OPTIONS=-Djava.io.tmpdir=${TMPDIR} ``` - +  ### File is not a supported reference file type @@ -123,7 +123,8 @@ one of the log files. It appears that sometimes GATK requires the file extension of "fasta" or "fa", for fasta files. Please make sure your file extensions correctly reflect the file type. +  +  - - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/GROMACS.md b/docs/Scientific_Computing/Supported_Applications/GROMACS.md index 79a83fdd6..19e752ac0 100644 --- a/docs/Scientific_Computing/Supported_Applications/GROMACS.md +++ b/docs/Scientific_Computing/Supported_Applications/GROMACS.md @@ -66,7 +66,7 @@ node at the same time. module load GROMACS/5.1.4-intel-2017a -# Prepare the binary input from precursor files +# Prepare the binary input from precursor files srun -n 1 gmx grompp -v -f minim.mdp -c protein.gro -p protein.top -o protein-EM-vacuum.tpr # Run the simulation @@ -95,9 +95,9 @@ checkpoint file using the `-cpi` flag, thus: `-cpi state.cpt`. If you run GROMACS on a node that is simultaneously running other jobs (even other GROMACS jobs), you may see warnings like this in your output: -WARNING: In MPI process #0: Affinity setting failed. This can cause -performance degradation! If you think your setting are correct, -contact the GROMACS developers. + WARNING: In MPI process #0: Affinity setting failed. This can cause + performance degradation! If you think your setting are correct, + contact the GROMACS developers. 
One way to prevent these warnings, which is also useful for reducing the risk of inefficient CPU usage, is to request entire nodes. On the @@ -105,22 +105,22 @@ Mahuika cluster, this can be done using the following lines in your input, altered as appropriate: - Using MPI parallelisation and hyperthreading, but no OpenMP -parallelisation: + parallelisation: ``` bash #SBATCH --nodes 4 # May vary #SBATCH --ntasks-per-node 72 # Must be 72 -# (the number of logical cores per node) + # (the number of logical cores per node) #SBATCH --cpus-per-task 1 # Must be 1 ``` - Using MPI parallelisation with neither hyperthreading nor OpenMP -parallelisation: + parallelisation: ``` bash #SBATCH --nodes 4 # May vary #SBATCH --ntasks-per-node 36 # Must be 36 -# (the number of physical cores per node) + # (the number of physical cores per node) #SBATCH --cpus-per-task 1 # Must be 1 #SBATCH --hint=nomultithread   # Don't use hyperthreading ``` @@ -131,7 +131,7 @@ parallelisation: #SBATCH --nodes 4 # May vary #SBATCH --ntasks-per-node 1 # Must be 1 #SBATCH --cpus-per-task 72 # Must be 72 -# (the number of logical cores per node) + # (the number of logical cores per node) ``` - Using hybrid (OpenMP + MPI) parallelisation but not hyperthreading: @@ -140,7 +140,7 @@ parallelisation: #SBATCH --nodes 4 # May vary #SBATCH --ntasks-per-node 1 # Must be 1 #SBATCH --cpus-per-task 36 # Must be 36 -# (the number of physical cores per node) + # (the number of physical cores per node) #SBATCH --hint=nomultithread # Don't use hyperthreading ``` @@ -151,7 +151,7 @@ by using `-ntomp ${SLURM_CPUS_PER_TASK}`. Hybrid parallelisation can be more efficient than MPI-only parallelisation, as within the same node there is no need for inter-task communication. 
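As a sketch of how the hybrid (OpenMP + MPI) directives above combine at run time, the launch step could look like the following. This is an illustrative sketch only: the `gmx_mpi` binary name, the fallback of 36 threads, and the `protein-MD` file prefix are assumptions, not taken from the examples above.

```shell
# Hybrid run sketch: one MPI task per node, all physical cores as OpenMP threads.
# Inside a real job, Slurm sets SLURM_CPUS_PER_TASK; the fallback of 36 matches
# the "no hyperthreading" example above.
export OMP_NUM_THREADS="${SLURM_CPUS_PER_TASK:-36}"

# Build the launch command (echoed here for illustration; in a real job
# script you would run it directly with srun).
launch_cmd="srun gmx_mpi mdrun -ntomp ${OMP_NUM_THREADS} -deffnm protein-MD"
echo "${launch_cmd}"
```

In a real submission script you would execute the command rather than echo it; the point is that `-ntomp` is derived from `--cpus-per-task`, so the two stay consistent.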
- +  **NOTE** on using GROMACS on Māui: diff --git a/docs/Scientific_Computing/Supported_Applications/Gaussian.md b/docs/Scientific_Computing/Supported_Applications/Gaussian.md index 5b6357e04..d39c40f8c 100644 --- a/docs/Scientific_Computing/Supported_Applications/Gaussian.md +++ b/docs/Scientific_Computing/Supported_Applications/Gaussian.md @@ -110,9 +110,9 @@ gjf_template="${system}.gjf.template" # Prepare a job-specific nobackup directory and set GAUSS_SCRDIR accordingly if [[ -n "${SLURM_ARRAY_TASK_COUNT}" && "${SLURM_ARRAY_TASK_COUNT}" -gt 1 ]] then -job_code="${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}" + job_code="${SLURM_ARRAY_JOB_ID}_${SLURM_ARRAY_TASK_ID}" else -job_code="${SLURM_JOB_ID}" + job_code="${SLURM_JOB_ID}" fi export GAUSS_SCRDIR="/nesi/nobackup/${SLURM_JOB_ACCOUNT}/mahuika_job_${job_code}" /usr/bin/mkdir -p "${GAUSS_SCRDIR}" @@ -120,27 +120,27 @@ export GAUSS_SCRDIR="/nesi/nobackup/${SLURM_JOB_ACCOUNT}/mahuika_job_${job_code} # Calculate the number of CPUs to use within Gaussian if [[ -n "${SLURM_CPUS_PER_TASK}" ]] then -gaussian_ncpus="${SLURM_CPUS_PER_TASK}" + gaussian_ncpus="${SLURM_CPUS_PER_TASK}" else -gaussian_ncpus=1 + gaussian_ncpus=1 fi # Calculate the amount of memory to use within Gaussian # That is, amount of memory requested of Slurm minus 2 GB if [[ -n "${SLURM_MEM_PER_NODE}" && "${SLURM_MEM_PER_NODE}" -ge 4096 ]] then -gaussian_memory=$((${SLURM_MEM_PER_NODE} - 2048)) + gaussian_memory=$((${SLURM_MEM_PER_NODE} - 2048)) else -/usr/bin/echo "Error: Not enough RAM requested (${SLURM_MEM_PER_NODE})." >&2 -/usr/bin/echo " Please set \"#SBATCH --mem\" to at least 4096 MB." >&2 -exit 2 + /usr/bin/echo "Error: Not enough RAM requested (${SLURM_MEM_PER_NODE})." >&2 + /usr/bin/echo " Please set \"#SBATCH --mem\" to at least 4096 MB." 
>&2 + exit 2 fi gjf_working_copy="${GAUSS_SCRDIR}/${system}.gjf" gaussian_checkpoint="${GAUSS_SCRDIR}/${system}.chk" /usr/bin/sed -e "s/<>/${gaussian_ncpus}/" "${gjf_template}" | \ -/usr/bin/sed -e "s/<>/${gaussian_memory}/" | \ -/usr/bin/sed -e "s:<>:${gaussian_checkpoint}:" > "${gjf_working_copy}" + /usr/bin/sed -e "s/<>/${gaussian_memory}/" | \ + /usr/bin/sed -e "s:<>:${gaussian_checkpoint}:" > "${gjf_working_copy}" srun g09 < "${gjf_working_copy}" ``` @@ -182,13 +182,13 @@ submission script. The key properties are `%NProcShared` and `%Mem`: - `%NProcShared` should be set to the number of CPU cores you intend -to use, matching the value of the `-c` or `--cpus-per-task` -directive in the Slurm job file. + to use, matching the value of the `-c` or `--cpus-per-task` + directive in the Slurm job file. - `%Mem` should be set to the amount of memory you intend to use. It -should be about 2 GB (2,048 MB) less than the value of `--mem` in -the Slurm job submission script. Note that `--mem` is interpreted as -being in MB rather than GB unless otherwise specified (i.e., with a -"G" on the end). + should be about 2 GB (2,048 MB) less than the value of `--mem` in + the Slurm job submission script. Note that `--mem` is interpreted as + being in MB rather than GB unless otherwise specified (i.e., with a + "G" on the end). 
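The "%Mem should be about 2 GB less than `--mem`" rule above can be sketched as a quick arithmetic check; the 8 GB request here is an arbitrary example value, not a recommendation.

```shell
# Illustrative check of the "%Mem = --mem minus 2048 MB" rule described above.
SLURM_MEM_PER_NODE=8192                        # e.g. "#SBATCH --mem 8G" => 8192 MB
gaussian_memory=$((SLURM_MEM_PER_NODE - 2048)) # leave ~2 GB headroom for overhead
echo "%Mem=${gaussian_memory}MB"               # prints %Mem=6144MB
```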
If you use the example Slurm script and template gjf file provided above (with appropriate modifications for your chemical system and desired diff --git a/docs/Scientific_Computing/Supported_Applications/Java.md b/docs/Scientific_Computing/Supported_Applications/Java.md index 96fcfe47b..0735810d7 100644 --- a/docs/Scientific_Computing/Supported_Applications/Java.md +++ b/docs/Scientific_Computing/Supported_Applications/Java.md @@ -75,7 +75,7 @@ Java us the \`module\` command to find and load for example: $ module spider Java ----------------------------------------------------------------------- ----------------------------------------------------------------------- -Java Platform, Standard Edition (Java SE) lets you develop and deploy +Java Platform, Standard Edition (Java SE) lets you develop and deploy Java applications on desktops and servers. Versions: @@ -107,12 +107,13 @@ given the option `-Djava.io.tmpdir=$TMPDIR.`  TMPDIR is automatically removed at the end of the job. - If you run your Java program by using the `java` command, that is in -a form like -`java java.program `, you -can specify the tmpdir as follows: -`java -Djava.io.tmpdir=$TMPDIR java.program `. + a form like + `java java.program `, you + can specify the tmpdir as follows: + `java -Djava.io.tmpdir=$TMPDIR java.program `. - If your Java program is called indirectly, or is pre-wrapped, you -will need to put the following line in your job submission script -before calling the Java program: -`export _JAVA_OPTIONS=-Djava.io.tmpdir=${TMPDIR}`. + will need to put the following line in your job submission script + before calling the Java program: + `export _JAVA_OPTIONS=-Djava.io.tmpdir=${TMPDIR}`. 
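The `_JAVA_OPTIONS` approach above can be sketched as follows. The fallback directory and its naming scheme are illustrative assumptions; in a NeSI job, `$TMPDIR` is already set for you by Slurm.

```shell
# Sketch: point Java's temporary files at a job-specific directory.
# The fallback path is an assumption for illustration only.
JOB_TMPDIR="${TMPDIR:-/tmp}/java_tmp_$$"
mkdir -p "${JOB_TMPDIR}"

# Any Java program started after this export will use JOB_TMPDIR for
# its temporary files, even if Java is invoked indirectly.
export _JAVA_OPTIONS="-Djava.io.tmpdir=${JOB_TMPDIR}"
echo "${_JAVA_OPTIONS}"
```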
+  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Julia.md b/docs/Scientific_Computing/Supported_Applications/Julia.md index 481b321fb..9610c6ed1 100644 --- a/docs/Scientific_Computing/Supported_Applications/Julia.md +++ b/docs/Scientific_Computing/Supported_Applications/Julia.md @@ -59,87 +59,87 @@ the Julia command line. In this documentation, we will assume you are using the command line, but the commands are the same within a script. 1. Load the environment module (not the same as a Julia module) -corresponding to the version of Julia you want to use, e.g. Julia -1.1.0: + corresponding to the version of Julia you want to use, e.g. Julia + 1.1.0: -``` sl -$ module load Julia/1.1.0 -``` + ``` sl + $ module load Julia/1.1.0 + ``` 2. Launch the Julia executable: -``` sl -# Use Julia interactively -$ julia -# Alternatively, use a Julia script -$ julia script.jl -``` + ``` sl + # Use Julia interactively + $ julia + # Alternatively, use a Julia script + $ julia script.jl + ``` 3. If you have opened Julia interactively, you should now see a Julia -welcome message and prompt, like the following. + welcome message and prompt, like the following. -``` sl -_ -_ _ _(_)_ | Documentation: https://docs.julialang.org -(_) | (_) (_) | -_ _ _| |_ __ _ | Type "?" for help, "]?" for Pkg help. -| | | | | | |/ _` | | -| | |_| | | | (_| | | Version 1.1.0 (2019-01-21) -_/ |\__'_|_|_|\__'_| | Official https://julialang.org/ release -|__/ | - -julia> -``` + ``` sl + _ + _ _ _(_)_ | Documentation: https://docs.julialang.org + (_) | (_) (_) | + _ _ _| |_ __ _ | Type "?" for help, "]?" for Pkg help. + | | | | | | |/ _` | | + | | |_| | | | (_| | | Version 1.1.0 (2019-01-21) + _/ |\__'_|_|_|\__'_| | Official https://julialang.org/ release + |__/ | + + julia> + ``` 4. Load the Julia package manager: -``` sl -julia> using Pkg -``` + ``` sl + julia> using Pkg + ``` 5. The most important variable for installing packages is called -`DEPOT_PATH`. 
The depot path is a series of directories that will be -searched, in order, for the package that you wish to install and its -dependencies. Clear the depot path. + `DEPOT_PATH`. The depot path is a series of directories that will be + searched, in order, for the package that you wish to install and its + dependencies. Clear the depot path. !!! prerequisite Warning -It is possible for a package to be installed somewhere on -`DEPOT_PATH`, but not compiled. If this happens, and the package -is a dependency of what you're trying to install, Julia will try -to compile it in situ. This is a bad thing most of the time, -because you're unlikely to have write access to the install -location, so the compilation will fail. Hence why clearing the -depot path is important. - -``` sl -julia> empty!(DEPOT_PATH) -``` + It is possible for a package to be installed somewhere on + `DEPOT_PATH`, but not compiled. If this happens, and the package + is a dependency of what you're trying to install, Julia will try + to compile it in situ. This is a bad thing most of the time, + because you're unlikely to have write access to the install + location, so the compilation will fail. Hence why clearing the + depot path is important. + + ``` sl + julia> empty!(DEPOT_PATH) + ``` 6. Add your preferred Julia package directory to the newly empty depot -path. + path. -``` sl -julia> push!(DEPOT_PATH, "/nesi/project/nesi12345/julia") -``` + ``` sl + julia> push!(DEPOT_PATH, "/nesi/project/nesi12345/julia") + ``` !!! prerequisite Tip -While a conventional personal Julia package directory is -`/home/joe.bloggs/.julia` or similar, there is no reason for the -directory to be within any particular user's home directory, or -for it to be a hidden directory with a name starting with a dot. -For shared Julia package directories, a visible directory within a -project directory will probably be more useful to you and your -colleagues. 
-In any case, for obvious reasons, you should choose a directory to -which you have write access. + While a conventional personal Julia package directory is + `/home/joe.bloggs/.julia` or similar, there is no reason for the + directory to be within any particular user's home directory, or + for it to be a hidden directory with a name starting with a dot. + For shared Julia package directories, a visible directory within a + project directory will probably be more useful to you and your + colleagues. + In any case, for obvious reasons, you should choose a directory to + which you have write access. 7. Install the desired Julia package. In this case, we are showing the -machine-learning package Flux as an example. + machine-learning package Flux as an example. -``` sl -julia> Pkg.add("Flux") -``` + ``` sl + julia> Pkg.add("Flux") + ``` -Julia should chug away for a while, downloading and compiling -various packages into the chosen directory. + Julia should chug away for a while, downloading and compiling + various packages into the chosen directory. ### Making Julia packages available at runtime @@ -153,11 +153,11 @@ On NeSI, the default contents of `LOAD_PATH` are as follows: ``` sl julia> LOAD_PATH 5-element Array{String,1}: -"@" -"@v#.#" -"@stdlib" -"/opt/nesi/mahuika/Julia/1.1.0/local/share/julia/environment/v1.1" -"." + "@" + "@v#.#" + "@stdlib" + "/opt/nesi/mahuika/Julia/1.1.0/local/share/julia/environment/v1.1" + "." ``` The first three elements are special entries, while the fourth element @@ -172,22 +172,22 @@ certainly the easiest is to do the following in your environment: $ export JULIA_LOAD_PATH="/nesi/project/nesi12345/julia:${JULIA_LOAD_PATH}" ``` !!! prerequisite Tip -By prepending the directory to `JULIA_LOAD_PATH` instead of appending -it, you ensure that your project's versions of Julia packages are used -by default, in preference to whatever might be managed centrally. This -is probably what you want to do. 
If you want to use the centrally -managed versions of Julia packages first and only use your project's -package if there isn't a centrally managed instance, you can append it -instead: -``` sl -$ export JULIA_LOAD_PATH=${JULIA_LOAD_PATH}:/nesi/project/nesi12345/julia" -``` + By prepending the directory to `JULIA_LOAD_PATH` instead of appending + it, you ensure that your project's versions of Julia packages are used + by default, in preference to whatever might be managed centrally. This + is probably what you want to do. If you want to use the centrally + managed versions of Julia packages first and only use your project's + package if there isn't a centrally managed instance, you can append it + instead: + ``` sl + $ export JULIA_LOAD_PATH=${JULIA_LOAD_PATH}:/nesi/project/nesi12345/julia" + ``` !!! prerequisite Tip -To revert to the default load path, just unset `JULIA_LOAD_PATH`: -``` sl -$ unset JULIA_LOAD_PATH -$ export JULIA_LOAD_PATH -``` + To revert to the default load path, just unset `JULIA_LOAD_PATH`: + ``` sl + $ unset JULIA_LOAD_PATH + $ export JULIA_LOAD_PATH + ``` ## Profiling Julia code @@ -204,29 +204,29 @@ In order to collect profiling data with VTune you should: - load a "-VTune" variant of Julia, for example: -``` sl -module load Julia/1.2.0-gimkl-2018b-VTune -``` + ``` sl + module load Julia/1.2.0-gimkl-2018b-VTune + ``` - load a VTune module: -``` sl -module load VTune -``` + ``` sl + module load VTune + ``` - enable Julia VTune profiling by setting an environment variable: -``` sl -export ENABLE_JITPROFILING=1 -``` + ``` sl + export ENABLE_JITPROFILING=1 + ``` - prepend the usual command that you use to run your Julia program -with the desired VTune command, for example to run a hotspots -analysis: + with the desired VTune command, for example to run a hotspots + analysis: -``` sl -srun amplxe-cl -collect hotspots -- julia your_program.jl -``` + ``` sl + srun amplxe-cl -collect hotspots -- julia your_program.jl + ``` VTune will create a result 
directory which contains the profiling information. This result can be loaded using the VTune GUI, assuming you @@ -236,5 +236,5 @@ have X11 forwarding enabled: amplxe-gui --path-to-open ``` -Additional information about VTune can be found in the [User + Additional information about VTune can be found in the [User Guide](https://software.intel.com/en-us/vtune-amplifier-help). \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/JupyterLab.md b/docs/Scientific_Computing/Supported_Applications/JupyterLab.md index 361c37c5c..8bdc9b2d2 100644 --- a/docs/Scientific_Computing/Supported_Applications/JupyterLab.md +++ b/docs/Scientific_Computing/Supported_Applications/JupyterLab.md @@ -20,11 +20,11 @@ zendesk_section_id: 360000040076 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Note -This documentation contains our legacy instructions for running -JupyterLab by tunnelling through the lander node. -[If you are a Mahuika cluster user, we recommend using jupyter via -jupyter.nesi.org.nz. Follow this link for more -information](https://support.nesi.org.nz/hc/en-gb/articles/360001555615) + This documentation contains our legacy instructions for running + JupyterLab by tunnelling through the lander node. + [If you are a Mahuika cluster user, we recommend using jupyter via  + jupyter.nesi.org.nz. Follow this link for more + information](https://support.nesi.org.nz/hc/en-gb/articles/360001555615) NeSI provides a service for working on Jupyter Notebooks. As a first step JupyterLab can be used on Mahuika nodes. 
JupyterLab is a @@ -38,19 +38,19 @@ procedure will be simplified in future, but now require the following steps, which are then described in more details: - [Launch JupyterLab](#h_a0e4107a-358d-4db6-a7a4-c2c3273c74ed) -- [Connect to the NeSI system to establish SSH port -forwarding ](#h_22b17d98-8054-4898-871e-38a42a2e3849) -- [SSH Command Line](#h_892370eb-662a-4480-9ae4-b56fd64eb7d0) -OR -- [MobaXterm GUI](#h_cc633523-5df0-4f24-a460-391ced9a0316) -- open another session to the NeSI system -- [Launch the JupyterLab -server](#h_a46369a1-5f2c-4ed8-82c2-f06c0c1d58b4) -- [on login nodes / virtual -labs](#h_fca84ce8-3167-4c14-a128-23049417a5dd) OR -- [on compute nodes](#h_6cb2d7b4-f63c-49ed-ba73-f58fd903d86d) -- [Launch JupyterLab in your local -browser](#h_22b17d98-8054-4898-871e-38a42a2e3849) + - [Connect to the NeSI system to establish SSH port + forwarding ](#h_22b17d98-8054-4898-871e-38a42a2e3849) + - [SSH Command Line](#h_892370eb-662a-4480-9ae4-b56fd64eb7d0) + OR + - [MobaXterm GUI](#h_cc633523-5df0-4f24-a460-391ced9a0316) + - open another session to the NeSI system + - [Launch the JupyterLab + server](#h_a46369a1-5f2c-4ed8-82c2-f06c0c1d58b4) + - [on login nodes / virtual + labs](#h_fca84ce8-3167-4c14-a128-23049417a5dd) OR + - [on compute nodes](#h_6cb2d7b4-f63c-49ed-ba73-f58fd903d86d) + - [Launch JupyterLab in your local + browser](#h_22b17d98-8054-4898-871e-38a42a2e3849) - [Kernels](#h_e7f80560-91c0-420a-bccb-17bbf8c2e916) - [Packages](#h_04f2f4e2-8e7a-486d-aea5-e020eb9df66e) @@ -68,12 +68,12 @@ This number needs to be used while establishing the port forwarding and while launching JupyterLab. In the following we use the port number 15051 (**please select another number**). -### Setup SSH port forwarding +### Setup SSH port forwarding  !!! 
prerequisite Requirements -- In the following we assume you already configured -your`.ssh/config` to use two hop method as described in the -[Standard Terminal -Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). + - In the following we assume you already configured + your`.ssh/config` to use two hop method as described in the + [Standard Terminal + Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). First, the port forwarding needs to be enabled between your local machine and the NeSI system. Therewith a local port will be connected to @@ -94,49 +94,49 @@ ssh -N -L 15051:localhost:15051 mahuika Here -N means "Do not execute a remote command" and -L means "Forward Local Port". !!! prerequisite Tips -- For Maui\_Ancil, e.g. w-mauivlab01 you may want to add the -following to your `.ssh/config` to avoid establishing the -additional hop manually. -``` sl -Host maui_vlab -User -Hostname w-mauivlab01.maui.niwa.co.nz -ProxyCommand ssh -W %h:%p maui -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 -``` -<username> needs to be changed. Hostnames can be adapted for -other nodes, e.g. `w-clim01` + - For Maui\_Ancil, e.g. w-mauivlab01 you may want to add the + following to your `.ssh/config` to avoid establishing the + additional hop manually. + ``` sl + Host maui_vlab + User + Hostname w-mauivlab01.maui.niwa.co.nz + ProxyCommand ssh -W %h:%p maui + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + ``` + <username> needs to be changed. Hostnames can be adapted for + other nodes, e.g. `w-clim01` #### MobaXterm GUI !!! prerequisite Tips -- MobaXterm has an internal terminal which acts like a linux -terminal and can be configured as described in the [Standard -Terminal -Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). -Therewith the [SSH command -line](#h_892370eb-662a-4480-9ae4-b56fd64eb7d0) approach above can -be used. 
- + - MobaXterm has an internal terminal which acts like a linux + terminal and can be configured as described in the [Standard + Terminal + Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). + Therewith the [SSH command + line](#h_892370eb-662a-4480-9ae4-b56fd64eb7d0) approach above can + be used. +  MobaXterm has a GUI to setup and launch sessions with port forwarding, click 'Tools > MobaSSH Thunnel (port forwarding)': - specify the lander.nesi.org.nz as SSH server address (right, lower -box, first line) + box, first line) - specify your user name (right, lower box, second line) -- specify the remote server address, e.g. login.mahuika.nesi.org.nz -(right, upper box first line) +- specify the remote server address, e.g. login.mahuika.nesi.org.nz  + (right, upper box first line) - specify the JupyterLab port number on the local side (left) and at -the remote server (right upper box, second line) + the remote server (right upper box, second line) - Save ![sshTunnel.PNG](../../assets/images/JupyterLab.PNG) -### Launch the JupyterLab server +### Launch the JupyterLab server  After successfully establishing the port forwarding, we need open another terminal and login to the NeSI system in the usual way, e.g. @@ -170,7 +170,7 @@ node](#h_6cb2d7b4-f63c-49ed-ba73-f58fd903d86d). #### On login nodes / virtual labs For very small (computational cheap and small memory) the JupyterLab can -be started on the login or virtual lab using: +be started on the login or virtual lab using:  ``` sl jupyter lab --port 15051 --no-browser @@ -201,10 +201,10 @@ similar to: ``` sl ... 
[C 14:03:19.911 LabApp] -To access the notebook, open this file in a browser: -file:///scale_wlg_persistent/filesets/project/nesi99996/.local/share/jupyter/runtime/nbserver-503-open.html -Or copy and paste one of these URLs: -http://localhost:15051/?token=d122855ebf4d029f2bfabb0da03ae01263972d7d830d79c4 + To access the notebook, open this file in a browser: + file:///scale_wlg_persistent/filesets/project/nesi99996/.local/share/jupyter/runtime/nbserver-503-open.html + Or copy and paste one of these URLs: + http://localhost:15051/?token=d122855ebf4d029f2bfabb0da03ae01263972d7d830d79c4 ``` The last line will be needed in the browser later. @@ -220,7 +220,7 @@ os.open('hostname').read() More resources can be requested, e.g. by using: ``` sl -srun --ntasks 1 -t 60 --cpus-per-task 5 --mem 512MB jupyter-compute 15051 +srun --ntasks 1 -t 60 --cpus-per-task 5 --mem 512MB jupyter-compute 15051 ``` Where 5 cores are requested for threading and a total memory of 3GB. @@ -230,7 +230,7 @@ libraries, which implement threading align the numbers of threads (often called jobs) to the selected number of cores (otherwise the performance will be affected). -### JupyterLab in your local browser +### JupyterLab in your local browser  Finally, you need to open your local web browser and copy and paste the URL specified by the JupyterLab server into the address bar. After @@ -243,7 +243,7 @@ initializing Jupyter Lab you should see a page similar to: The following JupyterLab kernel are installed: - Python3 -- R +- R  - Spark ### R @@ -268,7 +268,7 @@ module load Spark There are a long list of default packages provided by the JupyterLab environment module (list all using `!pip list`) and R (list using -`installed.packages(.Library)`, note the list is shortened). +`installed.packages(.Library)`, note the list is shortened).  
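The node check shown above with `os.open('hostname').read()` will actually raise an error (`os.open` expects file-open flags, not a command); a working sketch of the same idea, run from a notebook cell to confirm which node the kernel is on:

```python
import subprocess

# Print the name of the node the Jupyter kernel is running on, e.g. to
# confirm the server was launched on a compute node rather than a login node.
host = subprocess.run(["hostname"], capture_output=True, text=True).stdout.strip()
print(host)
```

`os.popen('hostname').read()` achieves the same thing, but the subprocess form avoids a shell and is the idiom recommended by the Python documentation.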
Furthermore, you can install additional packages as described on the [Python](https://support.nesi.org.nz/hc/en-gb/articles/207782537) and diff --git a/docs/Scientific_Computing/Supported_Applications/Keras.md b/docs/Scientific_Computing/Supported_Applications/Keras.md index b751ff995..655b71a6e 100644 --- a/docs/Scientific_Computing/Supported_Applications/Keras.md +++ b/docs/Scientific_Computing/Supported_Applications/Keras.md @@ -24,7 +24,7 @@ Python. Keras is included with TensorFlow. Note that there are [CPU and](https://support.nesi.org.nz/hc/en-gb/articles/360000997675-TensorFlow-on-CPUs) [GPU versions](https://support.nesi.org.nz/hc/en-gb/articles/360000990436-TensorFlow) of TensorFlow, here we'll use TensorFlow 1.10 for GPUs, which is available -as an environment module. +as an environment module.  Keras can be used to solve a wide set of problems using artificial neural networks, including pattern recognition. Ultimately, a neural @@ -34,7 +34,7 @@ neurons, which are modelled after biological neurons. The connections between neurons have different "weights", which when submitted to different stimuli will output different signals. With sufficient training, we can teach a neural network to acquire the correct weights, -i.e. adjust the weights until the desired output is produced. +i.e. adjust the weights until the desired output is produced.  
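As a toy illustration of the neuron-and-weights picture above (separate from the Keras example that follows; the numbers are arbitrary), a single artificial neuron is just a weighted sum of its inputs passed through an activation function — training adjusts the weights until the output is the desired one:

```python
import math

# One artificial neuron: combine inputs with weights, add a bias,
# then squash through a sigmoid activation.
def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps z to (0, 1)

out = neuron([1.0, 0.5], [0.8, -0.4], 0.1)  # z = 0.8 - 0.2 + 0.1 = 0.7
print(round(out, 3))
```

A network stacks many of these, and the learning step nudges each weight to reduce the difference between the produced and desired outputs.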
## Counting dots in images @@ -95,19 +95,19 @@ corresponding lines in classify.py look like (Python code): ``` sl clf = keras.Sequential() clf.add( keras.layers.Conv2D(32, kernel_size=(3,3), strides=(1,1), -padding='same', data_format='channels_last', activation='relu') ) +                             padding='same', data_format='channels_last', activation='relu') ) clf.add( keras.layers.MaxPooling2D(pool_size=(2, 2)) ) clf.add( keras.layers.Conv2D(128, kernel_size=(3,3), strides=(1,1), -padding='same', data_format='channels_last', activation='relu') ) +                             padding='same', data_format='channels_last', activation='relu') ) clf.add( keras.layers.MaxPooling2D(pool_size=(2, 2)) ) clf.add( keras.layers.Conv2D(256, kernel_size=(3,3), strides=(1,1), -padding='same', data_format='channels_last', activation='relu') ) +                             padding='same', data_format='channels_last', activation='relu') ) clf.add( keras.layers.MaxPooling2D(pool_size=(2, 2)) ) clf.add( keras.layers.Flatten() ) clf.add( keras.layers.Dense(1) ) ``` - +  We're now ready to train and test our model: @@ -136,7 +136,7 @@ sbatch classify.sl Upon completion of the run, expect to find file someResults.png in the same directory as classify.py. This file contains the predictions for the first 50 test images, which will vary for each training but the -result will look like: +result will look like:  ![someResults.png](../../assets/images/Keras.png) @@ -146,11 +146,12 @@ inferred values are to be rounded to the nearest integer. Plot titles in red indicate failures. Among the 100 test images, the correct number of dots was found in 90 percent of the cases (the accuracy will change with each training due to the randomness of the process). The predicted -number of dots should be off by no more than one unit in most cases. - - - +number of dots should be off by no more than one unit in most cases.  
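The accuracy quoted above comes from rounding the network's real-valued outputs to whole dot counts; a small sketch of that post-processing (the numbers here are made up — in classify.py they would come from `clf.predict()`):

```python
# Hypothetical predicted dot counts (floats) and the true labels.
pred = [3.2, 4.9, 6.1, 2.8]
true = [3, 5, 6, 4]

rounded = [round(p) for p in pred]  # nearest whole number of dots
accuracy = sum(r == t for r, t in zip(rounded, true)) / len(true)
max_off = max(abs(r - t) for r, t in zip(rounded, true))
print(accuracy, max_off)  # prints: 0.75 1
```

With these four samples, three predictions round to the correct count and the one failure is off by a single dot, mirroring the behaviour described for the full 100-image test set.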
+  +  +  +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Lambda Stack.md b/docs/Scientific_Computing/Supported_Applications/Lambda Stack.md index e334eed4f..d03bcba4c 100644 --- a/docs/Scientific_Computing/Supported_Applications/Lambda Stack.md +++ b/docs/Scientific_Computing/Supported_Applications/Lambda Stack.md @@ -102,7 +102,7 @@ export SINGULARITY_CACHEDIR=/path/to/somewhere/else/with/lots/of/space singularity build lambda-stack-focal-$(date +%Y%m%d).sif docker-daemon:lambda-stack:20.04 ``` - +  ## Lambda Stack via Slurm @@ -125,7 +125,7 @@ module purge module load Singularity # for convenience store the singularity command in an environment variable -# feel free to add additional binds if you need them +# feel free to add additional binds if you need them SINGULARITY="singularity exec --nv -B ${PWD} ${SIF}" # run a command in the container @@ -151,7 +151,7 @@ export SIF=/opt/nesi/containers/lambda-stack/lambda-stack-focal-latest.sif # create a jupyter kernel using the Python within the Singularity image singularity exec -B $HOME $SIF python -m ipykernel install --user \ ---name lambdastack --display-name="Lambda Stack Python 3" + --name lambdastack --display-name="Lambda Stack Python 3" ``` If successful this should report that a kernelspec has been installed. 
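After the `ipykernel install` step above you can verify that the kernelspec landed where Jupyter looks for user kernels; a sketch, assuming the `--name lambdastack` used above:

```shell
# User kernelspecs live under ~/.local/share/jupyter/kernels; the
# directory name matches the --name passed to 'python -m ipykernel install'.
KERNEL_DIR="${HOME}/.local/share/jupyter/kernels/lambdastack"
if [ -d "${KERNEL_DIR}" ]; then
    echo "kernel registered: ${KERNEL_DIR}"
else
    echo "kernel missing - re-run the ipykernel install step"
fi
```

`jupyter kernelspec list` prints the same information if Jupyter is on your PATH.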
@@ -179,7 +179,7 @@ module load Singularity homefull=$(readlink -e $HOME) # for convenience store the singularity command in an environment variable -# feel free to add additional binds if you need them +# feel free to add additional binds if you need them SINGULARITY="singularity exec --nv -B ${HOME},${homefull},${PWD} ${SIF}" # run a command in the container @@ -278,12 +278,12 @@ BENCH_SCRIPT=transformers/examples/pytorch/benchmarking/run_benchmark.py # run the benchmarks python ${BENCH_SCRIPT} --no_multi_process --training --no_memory \ ---save_to_csv --env_print \ ---models bert-base-cased bert-large-cased \ -bert-large-uncased gpt2 \ -gpt2-large gpt2-xl \ ---batch_sizes 8 \ ---sequence_lengths 8 32 128 512 + --save_to_csv --env_print \ + --models bert-base-cased bert-large-cased \ + bert-large-uncased gpt2 \ + gpt2-large gpt2-xl \ + --batch_sizes 8 \ + --sequence_lengths 8 32 128 512 ``` Now create a Slurm script that will launch the job, names @@ -322,7 +322,8 @@ Submit this job to Slurm and then wait for the benchmarks to run: sbatch run-benchmark-torch.sl ``` +  +  - - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/MAKER.md b/docs/Scientific_Computing/Supported_Applications/MAKER.md index 7abe2a042..43628c636 100644 --- a/docs/Scientific_Computing/Supported_Applications/MAKER.md +++ b/docs/Scientific_Computing/Supported_Applications/MAKER.md @@ -23,7 +23,7 @@ zendesk_section_id: 360000040076 Since the MAKER control file *maker\_exe.ctl* is just an annoyance in an environment module based system we have patched MAKER to make that -optional. If it is absent then the defaults will be used directly. +optional. If it is absent then the defaults will be used directly.  ## Parallelism @@ -44,9 +44,10 @@ srun maker -q ## Resources MAKER creates many files in its output, sometimes hundreds of thousands. 
-There is a risk that you exhaust your quota of inodes, so:
+ There is a risk that you exhaust your quota of inodes, so:

- Don't run too many MAKER jobs simultaneously.
- Delete unneeded output files promptly after MAKER finishes.  You can
-of course use `nn_archive_files` or `tar` to archive them first.
+ of course use `nn_archive_files` or `tar` to archive them first.
+ 
\ No newline at end of file
diff --git a/docs/Scientific_Computing/Supported_Applications/MATLAB.md b/docs/Scientific_Computing/Supported_Applications/MATLAB.md
index a1fef4a24..4221e7200 100644
--- a/docs/Scientific_Computing/Supported_Applications/MATLAB.md
+++ b/docs/Scientific_Computing/Supported_Applications/MATLAB.md
@@ -25,28 +25,28 @@ zendesk_section_id: 360000040076

!!! prerequisite No Licence?
-If you want to run MATLAB code on the cluster, but are not a member of
-an institution without access to floating licences, MATLAB code can
-still be run on the cluster using MCR.
+ If you want to run MATLAB code on the cluster but are not a member of
+ an institution with access to floating licences, MATLAB code can
+ still be run on the cluster using MCR.

## Example script

!!! prerequisite Note
-When developing MATLAB code on your local machine, take measures to
-ensure it will be platform independent.  Use relative paths when
-possible and not avoid using '\\s see
-[here](https://www.mathworks.com/help/matlab/ref/fullfile.html).
+ When developing MATLAB code on your local machine, take measures to
+ ensure it will be platform independent. Use relative paths when
+ possible and avoid using '\\'; see
+ [here](https://www.mathworks.com/help/matlab/ref/fullfile.html).
### Script Example ``` bash #!/bin/bash -e -#SBATCH --job-name MATLAB_job # Name to appear in squeue -#SBATCH --time 01:00:00 # Max walltime +#SBATCH --job-name MATLAB_job # Name to appear in squeue +#SBATCH --time 01:00:00 # Max walltime #SBATCH --mem 512MB # Max memory module load MATLAB/2021b -# Run the MATLAB script MATLAB_job.m -matlab -nodisplay < MATLAB_job.m +# Run the MATLAB script MATLAB_job.m +matlab -nodisplay < MATLAB_job.m ``` ### Function Example @@ -61,18 +61,18 @@ matlab -nodisplay < MATLAB_job.m module load MATLAB/2021b -#Job run +#Job run matlab -batch "addpath(genpath('../parentDirectory'));myFunction(5,20)" # For versions older than 2019a, use '-nodisplay -r' instead of '-batch' ``` !!! prerequisite Command Line -When using matlab on command line, all flag options use a single '`-`' -e.g. `-nodisplay`, this differs from the GNU convention of using `--` -for command line options of more than one character. + When using matlab on command line, all flag options use a single '`-`' + e.g. `-nodisplay`, this differs from the GNU convention of using `--` + for command line options of more than one character. !!! prerequisite Bash in MATLAB -Using the prefix `!` will allow you to run bash commands from within -MATLAB. e.g. `!squeue -u $USER` will print your currently queued slurm -jobs. + Using the prefix `!` will allow you to run bash commands from within + MATLAB. e.g. `!squeue -u $USER` will print your currently queued slurm + jobs. ## Parallelism @@ -106,11 +106,11 @@ pc.JobStorageLocation = getenv('TMPDIR') parpool(pc, str2num(getenv('SLURM_CPUS_PER_TASK'))) ``` !!! prerequisite Note -Parpool will throw a warning when started due to a difference in how -time zone is specified. To fix this, add the following line to your -SLURM script: `export TZ="Pacific/Auckland'` + Parpool will throw a warning when started due to a difference in how + time zone is specified. 
To fix this, add the following line to your
+ SLURM script: `export TZ="Pacific/Auckland"`

-The main ways to make use of parpool are;
+ The main ways to make use of parpool are:

**parfor: **Executes each iteration of a loop on a different worker.
e.g.

@@ -118,7 +118,7 @@

``` sl
parfor i=1:100
-%Your operation here.
+ %Your operation here.
end
```

@@ -155,9 +155,9 @@ end

More info
[here](https://au.mathworks.com/help/parallel-computing/parfeval.html).
!!! prerequisite Note
-When killed (cancelled, timeout, etc), job steps utilising parpool may
-show state `OUT_OF_MEMORY`, this is a quirk of how the steps are ended
-and not necessarily cause to raise total memory requested.
+ When killed (cancelled, timed out, etc.), job steps utilising parpool may
+ show state `OUT_OF_MEMORY`; this is a quirk of how the steps are ended
+ and not necessarily a reason to raise the total memory requested.

------------------------------------------------------------------------

@@ -166,11 +166,11 @@

Determining which of your variables fall under is a good place
to start when attempting to parallelise your code.
!!! prerequisite Tip
-If your code is parallel at a high level it is preferable to use
-[SLURM job
-arrays](https://support.nesi.org.nz/hc/en-gb/articles/360000690275-Parallel-Execution#t_array)
-as there is less computational overhead and the multiple smaller jobs
-will queue faster.
+ If your code is parallel at a high level, it is preferable to use
+ [SLURM job
+ arrays](https://support.nesi.org.nz/hc/en-gb/articles/360000690275-Parallel-Execution#t_array)
+ as there is less computational overhead and the multiple smaller jobs
+ will queue faster.

## Using GPUs

@@ -192,16 +192,16 @@ available GPUs on NeSI, check the [GPU use on
NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001471955) support
page.
!!!
prerequisite Support for A100 GPUs -To use MATLAB with a A100 or a A100-1g.5gb GPU, you need to use a -version of MATLAB supporting the *Ampere* architecture (see [GPU -Support by -Release](https://nl.mathworks.com/help/releases/R2021b/parallel-computing/gpu-support-by-release.html)). -We recommend that you use R2021a or a more recent version. + To use MATLAB with a A100 or a A100-1g.5gb GPU, you need to use a + version of MATLAB supporting the *Ampere* architecture (see [GPU + Support by + Release](https://nl.mathworks.com/help/releases/R2021b/parallel-computing/gpu-support-by-release.html)). + We recommend that you use R2021a or a more recent version. !!! prerequisite Note on GPU cost -A GPU device-hour costs more than a core-hour, depending on the type -of GPU. You can find a comparison table in our [What is an -allocation?](https://support.nesi.org.nz/hc/en-gb/articles/360001385735) -support page. + A GPU device-hour costs more than a core-hour, depending on the type + of GPU. You can find a comparison table in our [What is an + allocation?](https://support.nesi.org.nz/hc/en-gb/articles/360001385735) + support page. ### GPU Example @@ -262,14 +262,14 @@ more info about compiling software on NeSI ### Writing mex functions -This involves the following steps (using C++ as an example): +  This involves the following steps (using C++ as an example): 1. Focus on a loop to extend, preferably a nested set of loops. 2. Identify the input and output variables of the section of code to -extend. + extend. 3. Write C++ code. The name of the C++ file should match the name of -the function to call from MATLAB, e.g. `myFunction.cpp` for a -function named `myFunction`. + the function to call from MATLAB, e.g. `myFunction.cpp` for a + function named `myFunction`. 4. 
Compile the extension using the MATLAB command `mex myFunction.cpp` At the minimum, the C++ extension should contain: @@ -279,8 +279,8 @@ At the minimum, the C++ extension should contain: #include void mexFunction(int nlhs, mxArray *plhs[], -int nrhs, const mxArray *prhs[]) { -// implementation goes here + int nrhs, const mxArray *prhs[]) { + // implementation goes here } ``` @@ -323,7 +323,7 @@ should feel free to create objects inside C++ code (required for functions that have return values). Some mex function source code examples can be found in the table -[here](https://au.mathworks.com/help/matlab/matlab_external/table-of-mex-file-source-code-files.html). +[here](https://au.mathworks.com/help/matlab/matlab_external/table-of-mex-file-source-code-files.html).  ### Compilation @@ -344,7 +344,7 @@ compilers will be used. Further configuration can be done within MATLAB using the command `mex -setup` -`mex `  will then compile the mex function. +`mex `  will then compile the mex function.  Default compiler flags can be overwritten with by setting the appropriate environment variables. The COMPFLAGS variable is ignored as @@ -359,8 +359,8 @@ it is Windows specific. For example, adding OpenMP flags for a fortran compile: !!! prerequisite Compiler Version Errors -Using an 'unsupported' compiler with versions of MATLAB 2020b onward -will result in an Error (previously was a 'Warning'). + Using an 'unsupported' compiler with versions of MATLAB 2020b onward + will result in an Error (previously was a 'Warning'). 
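The OpenMP example mentioned above might look like the following sketch. `myFunction.F90` is a hypothetical file name, and whether `mex` honours these exact environment variables depends on your MATLAB version and configured compiler, so treat this as an assumption to verify with `mex -setup`:

```shell
# Append OpenMP flags through the environment variables a Fortran mex
# build reads (FFLAGS for compilation, LDFLAGS for linking).
export FFLAGS="${FFLAGS:-} -fopenmp"
export LDFLAGS="${LDFLAGS:-} -fopenmp"
echo "FFLAGS=${FFLAGS}"
# mex myFunction.F90   # then compile, inside MATLAB or via the mex wrapper
```

Remember that `COMPFLAGS` is ignored on Linux, as noted above, so flags must go through the variables for the relevant language.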
## Known Bugs diff --git a/docs/Scientific_Computing/Supported_Applications/Miniconda3.md b/docs/Scientific_Computing/Supported_Applications/Miniconda3.md index c1377e2a1..ed66fd87d 100644 --- a/docs/Scientific_Computing/Supported_Applications/Miniconda3.md +++ b/docs/Scientific_Computing/Supported_Applications/Miniconda3.md @@ -26,17 +26,17 @@ dependencies in dedicated environment, giving you more freedom to install software yourself at the expense of possibly less optimized packages and no curation by the NeSI team. !!! prerequisite Alternatives -- If you want a more reproducible and isolated environment, we -recommend using the [Singularity -containers](https://support.nesi.org.nz/hc/en-gb/articles/360001107916-Singularity). -- If you only need access to Python and standard numerical libraries -(numpy, scipy, matplotlib, etc.), you can use the [Python -environment -module](https://support.nesi.org.nz/hc/en-gb/articles/207782537-Python). + - If you want a more reproducible and isolated environment, we + recommend using the [Singularity + containers](https://support.nesi.org.nz/hc/en-gb/articles/360001107916-Singularity). + - If you only need access to Python and standard numerical libraries + (numpy, scipy, matplotlib, etc.), you can use the [Python + environment + module](https://support.nesi.org.nz/hc/en-gb/articles/207782537-Python). !!! prerequisite Māui Ancillary Nodes -On Māui Ancillary Nodes, you can also use the `Anaconda3` module, -which provides a default environment pre-installed with a set of -numerical libraries (numpy, scipy, matplotlib, etc.). + On Māui Ancillary Nodes, you can also use the `Anaconda3` module, + which provides a default environment pre-installed with a set of + numerical libraries (numpy, scipy, matplotlib, etc.). 
## Module loading and conda environments isolation @@ -53,29 +53,29 @@ export PYTHONNOUSERSITE=1 Here are the explanations for each line of this snippet: - `module purge && module load Miniconda3` ensures that no other -environment module can affect your conda environments. In -particular, the Python environment module change the `PYTHONPATH` -variable, breaking the isolation of the conda environments. If you -need other environment modules, make sure to load them after this -line. + environment module can affect your conda environments. In + particular, the Python environment module change the `PYTHONPATH` + variable, breaking the isolation of the conda environments. If you + need other environment modules, make sure to load them after this + line. - `source $(conda info --base)/etc/profile.d/conda.sh` ensures that -you can use the `conda activate` command. + you can use the `conda activate` command. - `export PYTHONNOUSERSITE=1` makes sure that local packages installed -in your home folder `~/.local/lib/pythonX.Y/site-packages/` (where -`X.Y` is the Python version, e.g. 3.8) by `pip install --user` are -excluded from your conda environments. + in your home folder `~/.local/lib/pythonX.Y/site-packages/` (where + `X.Y` is the Python version, e.g. 3.8) by `pip install --user` are + excluded from your conda environments. !!! prerequisite Do not use `conda init` -We **strongly** recommend against using `conda init`. It inserts a -snippet in your `~/.bashrc` file that will freeze the version of conda -used, bypassing the environment module system. + We **strongly** recommend against using `conda init`. It inserts a + snippet in your `~/.bashrc` file that will freeze the version of conda + used, bypassing the environment module system. !!! 
prerequisite Māui Ancillary Nodes -On Māui Ancillary Nodes, you need to (re)load the `NeSI` module after -using `module purge`: -``` sl -module purge && module load NeSI Miniconda3 -source $(conda info --base)/etc/profile.d/conda.sh -export PYTHONNOUSERSITE=1 -``` + On Māui Ancillary Nodes, you need to (re)load the `NeSI` module after + using `module purge`: + ``` sl + module purge && module load NeSI Miniconda3 + source $(conda info --base)/etc/profile.d/conda.sh + export PYTHONNOUSERSITE=1 + ``` ## Prevent conda from using /home storage @@ -95,11 +95,11 @@ conda config --add pkgs_dirs /nesi/nobackup//$USER/conda_pkgs where `` should be replace with your project code. This setting is saved in your `~/.condarc` configuration file. !!! prerequisite Note -Your package cache will be subject to the nobackup autodelete process -(details available in the [Nobackup -autodelete](https://support.nesi.org.nz/hc/en-gb/articles/360001162856-Automatic-cleaning-of-nobackup-file-system) -support page). The package cache folder is for temporary storage so it -is safe if files within the cache folder are removed. + Your package cache will be subject to the nobackup autodelete process + (details available in the [Nobackup + autodelete](https://support.nesi.org.nz/hc/en-gb/articles/360001162856-Automatic-cleaning-of-nobackup-file-system) + support page). The package cache folder is for temporary storage so it + is safe if files within the cache folder are removed. Next, we recommend using the `-p` or `--prefix` options when creating new conda environments, instead of `-n` or `--name` options. Using `-p` @@ -123,13 +123,13 @@ environment from an `environment.yml` file: conda env create -f environment.yml -p /nesi/project//my_conda_env ``` !!! prerequisite Reduce prompt prefix -By default, when activating a conda environment created with `-p` or -`--prefix`, the entire path of the environment is be added to the -prompt. 
To remove this long prefix in your shell prompt, use the
-following configuration:
-``` sl
-conda config --set env_prompt '({name})'
-```
+ By default, when activating a conda environment created with `-p` or
+ `--prefix`, the entire path of the environment is added to the
+ prompt. To remove this long prefix in your shell prompt, use the
+ following configuration:
+ ``` sl
+ conda config --set env_prompt '({name})'
+ ```

## Faster solver `mamba` (experimental feature)

diff --git a/docs/Scientific_Computing/Supported_Applications/ORCA.md b/docs/Scientific_Computing/Supported_Applications/ORCA.md
index 025d928c1..60d412b56 100644
--- a/docs/Scientific_Computing/Supported_Applications/ORCA.md
+++ b/docs/Scientific_Computing/Supported_Applications/ORCA.md
@@ -90,12 +90,12 @@ directory from which the ORCA executable is invoked.

To restart from an existing GBW file, you should do the following:

1. Ensure that the GBW file you want to start from is renamed so that
-it does not have the same base name as your intended input file.
-Otherwise, it will be overwritten and destroyed as soon as ORCA
-starts running.
+ it does not have the same base name as your intended input file.
+ Otherwise, it will be overwritten and destroyed as soon as ORCA
+ starts running.
2. In your input file, specify the following lines, replacing
-"checkpoint.gbw" with the name of the GBW file you intend to read
-from:
+ "checkpoint.gbw" with the name of the GBW file you intend to read
+ from:

``` sl
! moread
diff --git a/docs/Scientific_Computing/Supported_Applications/OpenFOAM.md b/docs/Scientific_Computing/Supported_Applications/OpenFOAM.md
index 6e8a37036..08d24e877 100644
--- a/docs/Scientific_Computing/Supported_Applications/OpenFOAM.md
+++ b/docs/Scientific_Computing/Supported_Applications/OpenFOAM.md
@@ -25,8 +25,8 @@ zendesk_section_id: 360000040076

OpenFOAM (Open Field Operation And Manipulation) is a open-source C++
toolbox maintained by the OpenFOAM foundation and ESI Group.
Although primarily used for CFD (Computational Fluid Dynamics) OpenFOAM can be -used in a wide range of fields from solid mechanics to chemistry. - +used in a wide range of fields from solid mechanics to chemistry. + The lack of licence limitations and native parallelisation makes OpenFOAM well suited for a HPC environment. OpenFOAM is an incredibly powerful tool, but does require a moderate degree of computer literacy @@ -66,8 +66,8 @@ module load OpenFOAM/v1712-gimkl-2017a source ${FOAM_BASH} decomposePar #Break domain into pieces for parallel execution. -srun simpleFoam -parallel -reconstructPar -latestTime #Collect +srun simpleFoam -parallel +reconstructPar -latestTime #Collect ``` ## Filesystem Limitations @@ -81,44 +81,44 @@ write there to crash.** There are a few ways to mitigate this -- **Use** `/nesi/nobackup` -The nobackup directory has a significantly higher inode count and no -disk space limits. - -- **ControlDict Settings** -- `WriteInterval` -Using a high write interval reduce number of output files and -I/O load. -- `deltaT` -Consider carefully an appropriate time-step, use adjustTimeStep -if suitable. -- `purgeWrite` -Not applicable for many jobs, this keeps only the last n steps, -e.g. purgeWrite 5 will keep the last 5 time-steps, with the -directories being constantly overwritten. -- `runTimeModifiable` -When true, dictionaries will be re-read at the start of every -time step. Setting this to false will decrease I/O load. -- `writeFormat` -Setting this to binary as opposed to ascii will decrease disk -use and I/O load. - -- **Monitor Filesystem ** -The command `nn_storage_quota` should be used to track filesystem -usage. There is a delay between making changes to a filesystem and -seeing it on `nn_storage_quota`. 
-
-``` sl
-Filesystem          Available  Used    Use%   Inodes    IUsed    IUse%
-home_cwal219        20G        1.957G  9.79%  92160     21052    22.84%
-project_nesi99999   2T         798G    38.96% 100000    66951    66.95%
-nobackup_nesi99999  6.833T                    10000000  2691383  26.91%
-```
-
-- **Contact Support**
-If you are following the recommendations here yet are still
-concerned about indoes, open a support ticket and we can raise the
-limit for you.
+- **Use** `/nesi/nobackup`
+ The nobackup directory has a significantly higher inode count and no
+ disk space limits.
+
+- **ControlDict Settings**
+ - `WriteInterval`
+ Using a high write interval reduces the number of output files and
+ the I/O load.
+ - `deltaT`
+ Consider carefully an appropriate time-step; use adjustTimeStep
+ if suitable.
+ - `purgeWrite`
+ Not applicable for many jobs; this keeps only the last n steps,
+ e.g. purgeWrite 5 will keep the last 5 time-steps, with the
+ directories being constantly overwritten.
+ - `runTimeModifiable`
+ When true, dictionaries will be re-read at the start of every
+ time step. Setting this to false will decrease I/O load.
+ - `writeFormat`
+ Setting this to binary as opposed to ascii will decrease disk
+ use and I/O load.
+
+- **Monitor Filesystem**
+ The command `nn_storage_quota` should be used to track filesystem
+ usage. There is a delay between making changes to a filesystem and
+ seeing it on `nn_storage_quota`.
+
+ ``` sl
+ Filesystem          Available  Used    Use%   Inodes    IUsed    IUse%
+ home_cwal219        20G        1.957G  9.79%  92160     21052    22.84%
+ project_nesi99999   2T         798G    38.96% 100000    66951    66.95%
+ nobackup_nesi99999  6.833T                    10000000  2691383  26.91%
+ ```
+
+- **Contact Support**
+ If you are following the recommendations here yet are still
+ concerned about inodes, open a support ticket and we can raise the
+ limit for you.

## Environment Variables

@@ -135,7 +135,7 @@ Or create your variables to be set in your Slurm script.

startFrom       ${START_TIME};
```

-This is essential when running parameter sweeps.
+ This is essential when running parameter sweeps. You can also directly edit your dictionaries with `sed`, e.g. @@ -144,7 +144,7 @@ NSUBDOMAINS=10 sed -i "s/\(numberOfSubdomains \)[[:digit:]]*\(;\)/\1 $NSUBDOMAINS\2/g" system/controlDict ``` -## Recommended Resources +## Recommended Resources    Generally, using 16 or less tasks will keep your job reasonably efficient. However this is *highly* dependant on the type of simulation @@ -173,7 +173,7 @@ into your terminal after `wget`. For example: wget https://github.com/vincentcasseau/hyStrath/archive/Concordia.tar.gz ``` -wget can also be used to fetch files from other sources. + wget can also be used to fetch files from other sources. #### If repo only @@ -183,7 +183,7 @@ Use the command `git clone .git` For example: git clone https://github.com/vincentcasseau/hyStrath.git ``` -### Decompress +### Decompress  If your source is a .zip file use the command `unzip ` if it is a .tar.gz use the command `tar -xvzf ` @@ -202,7 +202,7 @@ A library/application named 'newApp' would have the structure. ![](../../assets/images/OpenFOAM_0.png) -To build \`newApp\` one would run: + To build \`newApp\` one would run: ``` sl module load OpenFOAM @@ -223,7 +223,7 @@ specifies variables `$FOAM_USER_LIBBIN` or `$FOAM_USER_APPBIN` instead. User compiled libraries are kept in `$FOAM_USER_LIBBIN`, by default this is set -to `~/$USER/OpenFOAM/$USER-/platforms/linux64GccDPInt32Opt/lib` +to `~/$USER/OpenFOAM/$USER-/platforms/linux64GccDPInt32Opt/lib`  User compiled objects are kept in `$FOAM_USER_APPBIN`, by default this is set @@ -236,9 +236,9 @@ For example ``` sl module load OpenFOAM - + source $FOAM_BASH - + export FOAM_USER_LIBBIN=/nesi/project/nesi99999/custom_OF/lib export FOAM_USER_APPBIN=/nesi/project/nesi99999/custom_OF/bin ``` @@ -246,5 +246,5 @@ export FOAM_USER_APPBIN=/nesi/project/nesi99999/custom_OF/bin These variables need to be set to the same chosen paths before compiling and before running the solvers. !!! 
prerequisite Warning -Make sure to `export` your custom paths before `source $FOAM_BASH` -else they will be reset to default. \ No newline at end of file + Make sure to `export` your custom paths before `source $FOAM_BASH` + else they will be reset to default. \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/OpenSees.md b/docs/Scientific_Computing/Supported_Applications/OpenSees.md index 11f640123..258006372 100644 --- a/docs/Scientific_Computing/Supported_Applications/OpenSees.md +++ b/docs/Scientific_Computing/Supported_Applications/OpenSees.md @@ -25,7 +25,7 @@ There are three commands with which a OpenSees job can be launched. - OpenSeesSP - Intended for the single analysis of very large models. - OpenSeesMP - For advanced parametric studies. - +  More info can be found about running OpenSees in parallel [here](http://opensees.berkeley.edu/OpenSees/parallel/TNParallelProcessing.pdf). @@ -78,5 +78,6 @@ Retrieved in Tcl script: puts $::env(MY_VARIABLE) ``` +  - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/ParaView.md b/docs/Scientific_Computing/Supported_Applications/ParaView.md index 1e4b34e22..28c9bd467 100644 --- a/docs/Scientific_Computing/Supported_Applications/ParaView.md +++ b/docs/Scientific_Computing/Supported_Applications/ParaView.md @@ -25,7 +25,7 @@ zendesk_section_id: 360000040076 visualisation tool. The headless versions only provide ParaView Server, which can operate in batch mode, as well as in client-server operation. - +  ### Available Modules @@ -43,8 +43,8 @@ which can operate in batch mode, as well as in client-server operation. | ParaView/5.6.0-gimpi-2017a-Server-OSMesa |   |   | ✔ |   | | ParaView/5.6.0-gimpi-2018b |   |   | ✔ |   | !!! prerequisite Note -The ParaView server loaded must be the same version as the client you -have installed locally. + The ParaView server loaded must be the same version as the client you + have installed locally. 
@@ -54,47 +54,47 @@ If you want to use ParaView in client-server mode, use the following setup: - Load one of the ParaView Server modules listed above and launch the -server in your interactive visualisation session on the HPC using; + server in your interactive visualisation session on the HPC using; -``` sl -module load ParaView -``` + ``` sl + module load ParaView + ``` -- To start the ParaView server run; +- To start the ParaView server run; -``` sl -pvserver -``` + ``` sl + pvserver + ``` - You should see; -``` sl -Waiting for client... -Connection URL: cs://mahuika02:11111 -Accepting connection(s): mahuika02:11111 -``` + ``` sl + Waiting for client... + Connection URL: cs://mahuika02:11111 + Accepting connection(s): mahuika02:11111 + ``` - Create an SSH tunnel for port "11111" from your local machine to the -cluster. e.g. + cluster. e.g. -``` sl -ssh mahuika -L 11111:mahuika02:11111 -``` + ``` sl + ssh mahuika -L 11111:mahuika02:11111 + ``` -Make sure the host name and socket match those given by the server -earlier! + Make sure the host name and socket match those given by the server + earlier! - Launch the ParaView GUI on your local machine and go to "File > -Connect" or click -the ![mceclip0.png](../../assets/images/ParaView.png) button. + Connect" or click + the ![mceclip0.png](../../assets/images/ParaView.png) button. - Click on "Add Server", choose server type "Client / Server", host -"localhost" (as we will be using the SSH tunnel), and port "11111", -then click on "Configure" . + "localhost" (as we will be using the SSH tunnel), and port "11111", + then click on "Configure" . 
- ![mceclip1.png](../../assets/images/ParaView_0.png) diff --git a/docs/Scientific_Computing/Supported_Applications/Python.md b/docs/Scientific_Computing/Supported_Applications/Python.md index 3ced65db3..d9a1f7790 100644 --- a/docs/Scientific_Computing/Supported_Applications/Python.md +++ b/docs/Scientific_Computing/Supported_Applications/Python.md @@ -42,12 +42,12 @@ Python packages for computational work such as *numpy*, *scipy*, Our most recent Python environment modules have: - *multiprocessing.cpu\_count()* patched to return only the number of -CPUs available to the process, which in a Slurm job can be fewer -than the number of CPUs on the node. + CPUs available to the process, which in a Slurm job can be fewer + than the number of CPUs on the node. - PYTHONUSERBASE set to a path which includes the toolchain, so that -incompatible builds of the same version of Python don't attempt to -share user-installed libraries. + incompatible builds of the same version of Python don't attempt to + share user-installed libraries. 
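If you need the worker count explicitly - for example to size a `multiprocessing.Pool` - it can be derived from Slurm's environment with a fallback to `cpu_count()`. A minimal sketch; the `available_cpus` helper here is illustrative, not part of the NeSI Python modules:

``` python
import multiprocessing
import os

def available_cpus():
    """Return the number of CPUs usable by this job.

    Prefers Slurm's per-task allocation when present; otherwise falls
    back to multiprocessing.cpu_count() (which, in NeSI's patched
    modules, already reflects the CPUs available to the process).
    """
    slurm_cpus = os.environ.get("SLURM_CPUS_PER_TASK")
    if slurm_cpus is not None:
        return int(slurm_cpus)
    return multiprocessing.cpu_count()

if __name__ == "__main__":
    print(available_cpus())
```

Outside a Slurm job the fallback is used, so the same script runs unchanged on a login node or your own machine.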
## Example scripts @@ -66,14 +66,14 @@ python MyPythonScript.py ### MPI Example ``` sl -#!/bin/bash -e -#SBATCH --job-name=PythonMPI -#SBATCH --ntasks=2 # Number of MPI tasks -#SBATCH --time=00:30:00 -#SBATCH --mem-per-cpu=512MB # Memory per logical CPU - -module load Python -srun python PythonMPI.py # Executes ntasks copies of the script + #!/bin/bash -e + #SBATCH --job-name=PythonMPI + #SBATCH --ntasks=2 # Number of MPI tasks + #SBATCH --time=00:30:00 + #SBATCH --mem-per-cpu=512MB # Memory per logical CPU + + module load Python + srun python PythonMPI.py # Executes ntasks copies of the script ``` ``` sl @@ -93,11 +93,11 @@ rank_data += 1 # gather the data back to rank 0 data_gather = comm.gather(rank_data, root = 0) -# on rank 0 sum the gathered data and print both the sum of, +# on rank 0 sum the gathered data and print both the sum of, # and the unsummed data if rank == 0: -print('Gathered data:', data_gather) -print('Sum:', sum(data_gather)) + print('Gathered data:', data_gather) + print('Sum:', sum(data_gather)) ``` The above Python script will create a list of numbers (0-9) split @@ -109,47 +109,47 @@ data is printed. 
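The way a list is divided across ranks before `comm.scatter` can be sketched without MPI. `split_evenly` below is a hypothetical helper showing one common contiguous-chunking scheme, not code taken from the example above:

``` python
def split_evenly(data, n_chunks):
    """Divide data into n_chunks nearly equal contiguous parts,
    as is typically done before scattering a list across MPI ranks."""
    k, r = divmod(len(data), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        size = k + (1 if i < r else 0)  # spread the remainder over the first r chunks
        chunks.append(data[start:start + size])
        start += size
    return chunks

if __name__ == "__main__":
    # two ranks sharing the numbers 0-9, as in the MPI example
    print(split_evenly(list(range(10)), 2))
```

Note that `comm.scatter` itself only distributes one element of the list per rank, so the script must pre-chunk the data in this way when there are more items than ranks.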
#### Multiprocessing Example ``` sl -#!/bin/bash -e -#SBATCH --job-name=PytonMultiprocessing -#SBATCH --cpus-per-task=2 # Number of logical CPUs -#SBATCH --time=00:10:00 -#SBATCH --mem-per-cpu=512MB # Memory per logical CPU - -module load Python -python PythonMultiprocessing.py + #!/bin/bash -e + #SBATCH --job-name=PytonMultiprocessing + #SBATCH --cpus-per-task=2 # Number of logical CPUs + #SBATCH --time=00:10:00 + #SBATCH --mem-per-cpu=512MB # Memory per logical CPU + + module load Python + python PythonMultiprocessing.py ``` ``` sl import multiprocessing def calc_square(numbers, result1): -for idx, n in enumerate(numbers): -result1[idx] = n*n + for idx, n in enumerate(numbers): + result1[idx] = n*n def calc_cube(numbers, result2): -for idx, n in enumerate(numbers): -result2[idx] = n*n*n + for idx, n in enumerate(numbers): + result2[idx] = n*n*n if __name__ == "__main__": -numbers = [2,3,4] -# Sets up the shared memory variables, allowing the variables to be -# accessed globally across processes -result1 = multiprocessing.Array('i',3) -result2 = multiprocessing.Array('i',3) -# set up the processes -p1 = multiprocessing.Process(target=calc_square, args=(numbers,result1,)) -p2 = multiprocessing.Process(target=calc_cube, args=(numbers,result2,)) - -# start the processes -p1.start() -p2.start() - -# end the processes -p1.join() -p2.join() - -print(result1[:]) -print(result2[:]) + numbers = [2,3,4] + # Sets up the shared memory variables, allowing the variables to be + # accessed globally across processes + result1 = multiprocessing.Array('i',3) + result2 = multiprocessing.Array('i',3) + # set up the processes + p1 = multiprocessing.Process(target=calc_square, args=(numbers,result1,)) + p2 = multiprocessing.Process(target=calc_cube, args=(numbers,result2,)) + + # start the processes + p1.start() + p2.start() + + # end the processes + p1.join() + p2.join() + + print(result1[:]) + print(result2[:]) ``` The above Python script will calculated the square and cube of an 
array @@ -215,22 +215,22 @@ import argparse # get tests from file class LoadFromFile(argparse.Action): -""" -class for reading arguments from file -""" -def __call__(self, parser, namespace, values, option_string=None): -with values as F: -vals = F.read().split() -setattr(namespace, self.dest, vals) + """ + class for reading arguments from file + """ + def __call__(self, parser, namespace, values, option_string=None): + with values as F: + vals = F.read().split() + setattr(namespace, self.dest, vals) def get_args(): -""" -Definition of the input arguments -""" -parser = argparse.ArgumentParser(description='Hello World') -parser.add_argument('-ID', type=int, action='store', dest='my_id', -help='Slurm ID') -return parser.parse_args() + """ + Definition of the input arguments + """ + parser = argparse.ArgumentParser(description='Hello World') + parser.add_argument('-ID', type=int, action='store', dest='my_id', + help='Slurm ID') + return parser.parse_args() ARGS = get_args() @@ -419,7 +419,7 @@ Unicode object representing an integer literal in the given base. The literal can be preceded by '+' or '-' and be surrounded by whitespace. The base defaults to 10. Valid bases are 0 and 2-36. Base 0 means to interpret the base from the string as an integer literal. ->> int('0b100', base=0) + >> int('0b100', base=0) 4 ``` diff --git a/docs/Scientific_Computing/Supported_Applications/R.md b/docs/Scientific_Computing/Supported_Applications/R.md index 289427d1b..a0393047e 100644 --- a/docs/Scientific_Computing/Supported_Applications/R.md +++ b/docs/Scientific_Computing/Supported_Applications/R.md @@ -51,11 +51,11 @@ General Public Licence. The full text of the R licence is available at ## NeSI Customisations - We patch the *snow* package so that there is no need to use RMPISNOW -when using it over MPI. + when using it over MPI. 
- Our most recent R environment modules set R\_LIBS\_USER to a path -which includes the compiler toolchain, so for -example *~/R/gimkl-2022a/4.2* rather than the usual default -of *~/R/x86\_64-pc-linux-gnu-library/4.2*. + which includes the compiler toolchain, so for + example *~/R/gimkl-2022a/4.2* rather than the usual default + of *~/R/x86\_64-pc-linux-gnu-library/4.2*. ## Related environment modules @@ -63,15 +63,15 @@ We also have some environment modules which extend the base R ones with extra packages: -  *R-Geo* with rgeos, rgdal and other geometric and geospatial -packages based on the libraries GEOS, GDAL, PROJ and UDUNITS. -- ``` sl -$ module load R-Geo/4.2.1-gimkl-2022a -``` + packages based on the libraries GEOS, GDAL, PROJ and UDUNITS. + - ``` sl + $ module load R-Geo/4.2.1-gimkl-2022a + ``` - *R-bundle-Bioconductor* with many of the BioConductor suite of -packages. -- ``` sl -$ module load R-bundle-Bioconductor/3.15-gimkl-2022a-R-4.2.1 -``` + packages. + - ``` sl + $ module load R-bundle-Bioconductor/3.15-gimkl-2022a-R-4.2.1 + ``` ## Examples @@ -103,7 +103,7 @@ of sizes 1 million to 1000050. Set the number of workers in your submission script with --cpus-per-task=... Note that all workers run on the same node. Hence, the number of workers is limited to the number of cores (physical if --hint=nomultithread or logical if using ---hint=multithread). +--hint=multithread).  ``` sl library(doParallel) @@ -111,7 +111,7 @@ registerDoParallel(strtoi(Sys.getenv("SLURM_CPUS_PER_TASK"))) # 50 calculations, store the result in 'x' x <- foreach(z = 1000000:1000050, .combine = 'c') %dopar% { -sum(rnorm(z)) + sum(rnorm(z)) } print(x) @@ -123,7 +123,7 @@ This example is similar to the above except that workers can run across multiple nodes. Note that we don't need to specify the number of workers when starting the cluster -- it will be derived by the mpiexec command, which slurm will invoke. You will need to load the gimkl module to -expose the MPI library. 
+expose the MPI library.  ``` sl library(doMPI, quiet=TRUE) @@ -132,7 +132,7 @@ registerDoMPI(cl) # 50 calculations, store the result in 'x' x <- foreach(z = 1000000:1000050, .combine = 'c') %dopar% { -sum(rnorm(z)) + sum(rnorm(z)) } closeCluster(cl) @@ -148,9 +148,9 @@ library(snow) # Select MPI-based or fork-based parallelism depending on ntasks if(strtoi(Sys.getenv("SLURM_NTASKS")) > 1) { -cl <- makeMPIcluster() + cl <- makeMPIcluster() } else { -cl <- makeSOCKcluster(max(strtoi(Sys.getenv('SLURM_CPUS_PER_TASK')), 1)) + cl <- makeSOCKcluster(max(strtoi(Sys.getenv('SLURM_CPUS_PER_TASK')), 1)) } # 50 calculations to be done: @@ -264,7 +264,7 @@ so, call up the package library: $ module R/4.2.1-gimkl-2022a $ R ... -library() + library() ``` or just use the module command: @@ -284,14 +284,14 @@ You can print a list of the library directories in which R will look for packages by running the following command in an R session: ``` sl -.libPaths() + .libPaths() ``` For R/4.2.1 the command `.libPaths()` will return the following: ``` sl -.libPaths() -[1] "/home/YOUR_USER_NAME/R/gimkl-2022a/4.2" + .libPaths() +[1] "/home/YOUR_USER_NAME/R/gimkl-2022a/4.2" [2] "/opt/nesi/CS400_centos7_bdw/R/4.2.1-gimkl-2022a/lib64/R/library" ``` @@ -302,7 +302,7 @@ provided by NeSI. This can be used in conjuction with eg: ``` sl -installed.packages("/home/YOUR_USER_NAME/R/gimkl-2022a/4.2") + installed.packages("/home/YOUR_USER_NAME/R/gimkl-2022a/4.2") ... ggplot2 NA NA NA "no" "4.2.1" ggrepel NA NA NA "yes" "4.2.1" @@ -332,7 +332,7 @@ dir.create("/nesi/project//Rpackages", showWarnings = FALSE, recursiv .libPaths(new="/nesi/project//Rpackages") ``` - +  #### Downloading and installing a new package @@ -344,7 +344,7 @@ For example, to install the sampling package: $ module load R/4.2.1-gimkl-2022a $ R ... 
-install.packages("sampling") + install.packages("sampling") ``` You will most likely be asked if you want to use a personal library and, @@ -361,7 +361,7 @@ You can confirm the package has been installed by using the library() command: ``` sl -library("foo") + library("foo") ``` If the package has been correctly installed, you will get no response. @@ -369,7 +369,7 @@ On the other hand, if the package is missing or was not installed correctly, an error message will typically be returned: ``` sl -library("foo") + library("foo") Error in library("foo") : there is no package called ‘foo’ ``` @@ -389,7 +389,7 @@ library in your R script: ``` sl $ R ... -dyn.load("~/R/lib64/mylib.so") + dyn.load("~/R/lib64/mylib.so") ``` ### Quitting an interactive R session @@ -397,14 +397,14 @@ dyn.load("~/R/lib64/mylib.so") At the R command prompt, when you want to quit R, type the following: ``` sl -quit() + quit() ``` You will be asked "Save workspace image? \[y/n/c\]". Type n. +  - -## Troubleshooting +## Troubleshooting ### Missing *devtools* @@ -417,9 +417,9 @@ $ module load devtools $ module load R/4.2.1-gimkl-2022a ``` +  - -### Can't install *sf, rgdal* etc +### Can't install *sf, rgdal* etc  Use the R-Geo module @@ -427,7 +427,7 @@ Use the R-Geo module $ module load R-Geo/4.2.1-gimkl-2022a ``` - +  ### Cluster/Parallel environment variable not accessed diff --git a/docs/Scientific_Computing/Supported_Applications/RAxML.md b/docs/Scientific_Computing/Supported_Applications/RAxML.md index 7930e31f7..fa2c7284d 100644 --- a/docs/Scientific_Computing/Supported_Applications/RAxML.md +++ b/docs/Scientific_Computing/Supported_Applications/RAxML.md @@ -30,7 +30,7 @@ RAxML search algorithm for maximum likelihood based inference of phylogenetic trees. The RAxML home page is at . - +  ## Licensing requirements @@ -80,11 +80,11 @@ The combinations of Slurm settings and RAxML types which make sense are: - `raxmlHPC-AVX` or `raxmlHPC-SSE3` with one task on only one CPU. 
- `raxmlHPC-PTHREADS-AVX` or `raxmlHPC-PTHREADS-SSE3` with one task -running on multiple CPUs. + running on multiple CPUs. - `raxmlHPC-MPI-AVX` or `raxmlHPC-MPI-SSE3` with multiple tasks, each -running on one CPU. + running on one CPU. - `raxmlHPC-HYBRID-AVX` or `raxmlHPC-HYBRID-SSE3` with multiple tasks, -each of which runs on multiple CPUs. + each of which runs on multiple CPUs. MPI and HYBRID are only useful for bootstrapped trees. diff --git a/docs/Scientific_Computing/Supported_Applications/Relion.md b/docs/Scientific_Computing/Supported_Applications/Relion.md index e55bb6d7e..f3d2ccbfa 100644 --- a/docs/Scientific_Computing/Supported_Applications/Relion.md +++ b/docs/Scientific_Computing/Supported_Applications/Relion.md @@ -24,7 +24,7 @@ zendesk_section_id: 360000040076 [//]: <> (REMOVE ME IF PAGE VALIDATED) Getting started with Relion is most easily done via its X11 GUI, which -is launched with the command "relion". +is launched with the command "relion".   ``` sl $ module load Relion @@ -63,7 +63,7 @@ srun relion_run_ctffind_mpi ... We have made some effort to integrate the Relion GUI directly with Slurm so that it can submit Slurm jobs directly, however this might not -entirely work yet. +entirely work yet.  Some of the Relion tools benefit tremendously from using a GPU. @@ -71,5 +71,6 @@ For licensing reasons we ask that you install the GPU accelerated *MotionCorr2* yourself if you find Relion's own CPU-only version of the same algorithm insufficient. +  - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Singularity.md b/docs/Scientific_Computing/Supported_Applications/Singularity.md index c61af9cd5..199759d91 100644 --- a/docs/Scientific_Computing/Supported_Applications/Singularity.md +++ b/docs/Scientific_Computing/Supported_Applications/Singularity.md @@ -70,7 +70,7 @@ supported Singularity version. 
For more general information on building containers please see the [Singularity -Documentation](https://sylabs.io/guides/3.0/user-guide/build_a_container.html). +Documentation](https://sylabs.io/guides/3.0/user-guide/build_a_container.html).  As building a container requires root privileges in general, this cannot be done directly on any NeSI nodes. You will need to copy a [Singularity @@ -130,12 +130,12 @@ A container in Singularity's SIF format can be easily moved to the HPC filesystem by: - Copying the image file from your local computer with basic file -transfer tools - please refer to our documentation on [Moving files -to/from the -cluster](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) -and [Data Transfer using -Globus](https://support.nesi.org.nz/hc/en-gb/articles/360000576776) -(if you have a very large container) for details + transfer tools - please refer to our documentation on [Moving files + to/from the + cluster](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) + and [Data Transfer using + Globus](https://support.nesi.org.nz/hc/en-gb/articles/360000576776) + (if you have a very large container) for details - Downloading the container from an online repository To download a container, use commands such as @@ -175,7 +175,7 @@ Bootstrap: docker From: ubuntu:latest %post -# intallation instructions go here + # intallation instructions go here ``` ## Running a container on Mahuika or Māui Ancil @@ -271,11 +271,11 @@ access it using the `--nv` flag: singularity run --nv my_container.sif ``` !!! prerequisite Note -Make sure that your container contains the CUDA toolkit and additional -libraries needed by your application (e.g. cuDNN). The `--nv` option -only ensures that the basic CUDA libraries from the host are bound -into the container and that the GPU device is accessible in the -container. + Make sure that your container contains the CUDA toolkit and additional + libraries needed by your application (e.g. cuDNN). 
The `--nv` option + only ensures that the basic CUDA libraries from the host are bound + into the container and that the GPU device is accessible in the + container. ### Network isolation @@ -321,20 +321,20 @@ further details on using Slurm. ## Tips & Tricks - Make sure that your container runs before uploading it - you will -not be able to rebuild it from a new definition file directly on the -HPC + not be able to rebuild it from a new definition file directly on the + HPC - Try to configure all software to run in user space without requiring -privilege escalation via "sudo" or other privileged capabilities -such as reserved network ports - although Singularity supports some -of these features inside a container on some systems, they may not -always be available on the HPC or other platforms, therefore relying -on features such as Linux user namespaces could limit the -portability of your container + privilege escalation via "sudo" or other privileged capabilities + such as reserved network ports - although Singularity supports some + of these features inside a container on some systems, they may not + always be available on the HPC or other platforms, therefore relying + on features such as Linux user namespaces could limit the + portability of your container - If your container runs an MPI application, make sure that the MPI -distribution that is installed inside the container is compatible -with Intel MPI + distribution that is installed inside the container is compatible + with Intel MPI - Write output data and log files to the HPC file system using a -directory that is bound into the container - this helps -reproducibility of results by keeping the container image immutable, -it makes sure that you have all logs available for debugging if a -job crashes, and it avoids inflating the container image file \ No newline at end of file + directory that is bound into the container - this helps + reproducibility of results by keeping the container image immutable, + 
it makes sure that you have all logs available for debugging if a + job crashes, and it avoids inflating the container image file \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Software Installation Request.md b/docs/Scientific_Computing/Supported_Applications/Software Installation Request.md index c6765384a..58b7dac12 100644 --- a/docs/Scientific_Computing/Supported_Applications/Software Installation Request.md +++ b/docs/Scientific_Computing/Supported_Applications/Software Installation Request.md @@ -26,36 +26,36 @@ team](mailto:support@nesi.org.nz?subject=New%20software%20request). In your message, please provide the following information: - What is the name and version number of the software you would like -to be installed? If you wish to use a copy from a version control -repository, what tag or release do you need? Please be aware that we -usually require a stable release version of a piece of software -before we will install it for all users. + to be installed? If you wish to use a copy from a version control + repository, what tag or release do you need? Please be aware that we + usually require a stable release version of a piece of software + before we will install it for all users. - Do you have a preference about which platform (Mahuika or Māui) we -install it on? + install it on? - Why would you like us to install this software package? - What is the web site or home web page of the package? If you don't -know this information or the package doesn't have a web site, who is -the author or lead developer? In some cases, there exist two or more -packages with the same or very similar names. If we know the web -site we can be sure that we are installing the same package that you -are requesting. + know this information or the package doesn't have a web site, who is + the author or lead developer? In some cases, there exist two or more + packages with the same or very similar names. 
If we know the web + site we can be sure that we are installing the same package that you + are requesting. - How is the package installed? For example, compiled from source, -precompiled binary, or installed as a Python, Perl, R, etc. library? + precompiled binary, or installed as a Python, Perl, R, etc. library? - What dependencies, if any, does the package require? Please be aware -that the exact dependency list may depend on the particular use -cases you have in mind (like the ability to read and write a -specific file format). + that the exact dependency list may depend on the particular use + cases you have in mind (like the ability to read and write a + specific file format). - Have you (or another member of your project team) tried to install -it yourself on a NeSI system? If so, were you successful? + it yourself on a NeSI system? If so, were you successful? - If you or your institution doesn't own the copyright in the -software, under what licence are you permitted to use it? Does that -licence allow you to install and run it on a NeSI system? (Hint: -Most free, open-source software licences will allow you to do this.) + software, under what licence are you permitted to use it? Does that + licence allow you to install and run it on a NeSI system? (Hint: + Most free, open-source software licences will allow you to do this.) - Who else do you know of who wants to use that software on a NeSI -system? Please provide their names, institutional affiliations, and -NeSI project codes (if you know them). + system? Please provide their names, institutional affiliations, and + NeSI project codes (if you know them). - What tests do you have that will allow us to verify that the -software is performing correctly and at an acceptable speed? + software is performing correctly and at an acceptable speed? Our team will review your request and will make a decision as to whether we will install the application and make it generally available. 
\ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Supernova.md b/docs/Scientific_Computing/Supported_Applications/Supernova.md index 8976027a4..5d9004009 100644 --- a/docs/Scientific_Computing/Supported_Applications/Supernova.md +++ b/docs/Scientific_Computing/Supported_Applications/Supernova.md @@ -33,18 +33,18 @@ The Supernova software package includes two processing pipelines and one for post-processing: - **`supernova mkfastq`** wraps Illumina's bcl2fastq to correctly -demultiplex Chromium-prepared sequencing samples and to convert -barcode and read data to FASTQ files. + demultiplex Chromium-prepared sequencing samples and to convert + barcode and read data to FASTQ files. - **`supernova run`** takes FASTQ files containing barcoded reads from -`supernova mkfastq` and builds a graph-based assembly. The approach -is to first build an assembly using read kmers (K = 48), then -resolve this assembly using read pairs (to K = 200), then use -barcodes to effectively resolve this assembly to K ≈ 100,000. The -final step pulls apart homologous chromosomes into phase blocks, -which are often several megabases in length. + `supernova mkfastq` and builds a graph-based assembly. The approach + is to first build an assembly using read kmers (K = 48), then + resolve this assembly using read pairs (to K = 200), then use + barcodes to effectively resolve this assembly to K ≈ 100,000. The + final step pulls apart homologous chromosomes into phase blocks, + which are often several megabases in length. - **`supernova mkoutput`** takes Supernova's graph-based assemblies -and produces several styles of FASTA suitable for downstream -processing and analysis. + and produces several styles of FASTA suitable for downstream + processing and analysis. Download latest release from 10xGenomics. 
@@ -87,13 +87,13 @@ We suggest users initially read the developers notes, at Further to that we also suggest, - check --maxreads, to be passed to supernova, is correctly set. -Recommended -reading..[https://bioinformatics.uconn.edu/genome-size-estimation-tutorial/# -](https://bioinformatics.uconn.edu/genome-size-estimation-tutorial/#) + Recommended + reading..[https://bioinformatics.uconn.edu/genome-size-estimation-tutorial/# +  ](https://bioinformatics.uconn.edu/genome-size-estimation-tutorial/#) - When passing `--localmem` to supernova, ensure this number is less -than the total memory passed to Slurm. + than the total memory passed to Slurm.  - Pass `${SLURM_CPUS_PER_TASK}` to supernova with the `--localcores` -argument. + argument. ## Tracking job progress via browser @@ -102,7 +102,7 @@ the call to supernova was run, or the path specified in the Slurm batch file via `--output`. ``` bash -head -n 30 .out + head -n 30 .out supernova run (2.1.1) @@ -115,10 +115,10 @@ Serving UI at http://wbh001:37982?auth=Bx2ccMZmJxaIfRNBOZ_XO_mQd1njNGL3rZry_eNI1 Running preflight checks (please wait)... ``` -Find the line.. + Find the line.. ``` bash -Serving UI at http://wbh001:37982?auth=Bx2ccMZmJxaIfRNBOZ_XO_mQd1njNGL3rZry_eNI1yU +Serving UI at http://wbh001:37982?auth=Bx2ccMZmJxaIfRNBOZ_XO_mQd1njNGL3rZry_eNI1yU  ``` The link assumes the form.. @@ -128,7 +128,7 @@ The link assumes the form.. - <node> Taken from above code snippet is wbh001 - <port> Taken from above code snippet is 37982 - <auth> Taken from above code snippet is -Bx2ccMZmJxaIfRNBOZ\_XO\_mQd1njNGL3rZry\_eNI1yU + Bx2ccMZmJxaIfRNBOZ\_XO\_mQd1njNGL3rZry\_eNI1yU  In a new local terminal window open an ssh tunnel to the node. 
This takes the following general form @@ -136,8 +136,8 @@ takes the following general form **`ssh -L :: -N `** - <d> An integer -- <server> see: [ -https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup) +- <server> see: [ + https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup) When details are added to the general form from the specifics in the snippet above, the following could be run.. @@ -160,14 +160,14 @@ http://localhost:9999/?auth=Bx2ccMZmJxaIfRNBOZ_XO_mQd1njNGL3rZry_eNI1yU ![Screen\_Shot\_2019-01-28\_at\_2.17.29\_PM.png](../../assets/images/Supernova.png) - +  ## Things to watch out for - Supernova will create checkpoints after completing stages in the -pipeline. In order to run from a previously created checkpoint you -will first need to delete the \_lock file located in the main output -directory (the directory named by `ID=${SLURM_JOB_NAME}` where the -`_log ` file is also located) and passed to supernova in the -`--id=${ID}` argument in the sample Slurm script above. Avoid -changing any other settings in both the call to Slurm and supernova. \ No newline at end of file + pipeline. In order to run from a previously created checkpoint you + will first need to delete the \_lock file located in the main output + directory (the directory named by `ID=${SLURM_JOB_NAME}` where the + `_log ` file is also located) and passed to supernova in the + `--id=${ID}` argument in the sample Slurm script above. Avoid + changing any other settings in both the call to Slurm and supernova. 
\ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Synda.md b/docs/Scientific_Computing/Supported_Applications/Synda.md index 082cc8852..2939cb122 100644 --- a/docs/Scientific_Computing/Supported_Applications/Synda.md +++ b/docs/Scientific_Computing/Supported_Applications/Synda.md @@ -118,7 +118,7 @@ new  CMIP6.CMIP.NOAA-GFDL.GFDL-ESM4.historical.r1i1p1f1.Omon.tos.gn.v20190726 new  CMIP6.CMIP.NOAA-GFDL.GFDL-ESM4.historical.r1i1p1f1.Omon.tos.gr.v20190726 ``` -Choose one of the datasets. To find out how big the dataset is, type: +Choose one of the datasets. To find out how big the dataset is, type:  ``` sl synda stat CMIP6.CMIP.NOAA-GFDL.GFDL-ESM4.historical.r1i1p1f1.Omon.tos.gr.v20190726 diff --git a/docs/Scientific_Computing/Supported_Applications/TensorFlow on CPUs.md b/docs/Scientific_Computing/Supported_Applications/TensorFlow on CPUs.md index 247d39f90..18d84b4ed 100644 --- a/docs/Scientific_Computing/Supported_Applications/TensorFlow on CPUs.md +++ b/docs/Scientific_Computing/Supported_Applications/TensorFlow on CPUs.md @@ -28,11 +28,11 @@ shorter compared to multicore CPUs. 
However, running TensorFlow on CPUs can nonetheless be attractive for projects where: - Runtime is dominated by IO, so that computational performance of -GPUs does not provide much advantage with respect to overall runtime -and core-hour charges + GPUs does not provide much advantage with respect to overall runtime + and core-hour charges - The workflow can benefit from parallel execution on many nodes with -large aggregated IO bandwidth (e.g., running an inference task on a -very large dataset, or training a large ensemble of models) + large aggregated IO bandwidth (e.g., running an inference task on a + very large dataset, or training a large ensemble of models) Tests with a machine learning application based on the Inception v3 network for image classification  using a Nvidia P100 GPU and 18 Intel @@ -179,3 +179,4 @@ It depends on your application how beneficial each operator parallelisation strategy is, so it is worth testing different configurations. +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/TensorFlow on GPUs.md b/docs/Scientific_Computing/Supported_Applications/TensorFlow on GPUs.md index 653a5ff0a..56ded9025 100644 --- a/docs/Scientific_Computing/Supported_Applications/TensorFlow on GPUs.md +++ b/docs/Scientific_Computing/Supported_Applications/TensorFlow on GPUs.md @@ -29,15 +29,15 @@ TensorFlow is callable from Python with the numerically intensive parts of the algorithms implemented in C++ for efficiency. This page focus on running TensorFlow with GPU support. !!! prerequisite See also -- To request GPU resources using `--gpus-per-node` option of Slurm, -see the [GPU use on -NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001471955) -documentation page. -- To run TensorFlow on CPUs instead, have a look at our article -[TensorFlow on -CPUs](https://support.nesi.org.nz/hc/en-gb/articles/360000997675) -for tips on how to configure TensorFlow and Slurm for optimal -performance. 
+ - To request GPU resources using `--gpus-per-node` option of Slurm, + see the [GPU use on + NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001471955) + documentation page. + - To run TensorFlow on CPUs instead, have a look at our article + [TensorFlow on + CPUs](https://support.nesi.org.nz/hc/en-gb/articles/360000997675) + for tips on how to configure TensorFlow and Slurm for optimal + performance. ## Use NeSI modules @@ -61,7 +61,7 @@ To install additional Python packages for your project, you can either: 1. install packages in your home folder, 2. install packages in a dedicated Python virtual environment for your -project. + project. The first option is easy but will consume space in your home folder and can create conflicts if you have multiple projects with different @@ -106,9 +106,9 @@ Python scripts in a Slurm submission script, using: source /bin/activate ``` !!! prerequisite Virtual environment isolation -Use `export PYTHONNOUSERSITE=1` to ensure that your virtual -environment is isolated from packages installed in your home folder -`~/.local/lib/python3.8/site-packages/`. + Use `export PYTHONNOUSERSITE=1` to ensure that your virtual + environment is isolated from packages installed in your home folder + `~/.local/lib/python3.8/site-packages/`. ## Conda environments @@ -155,37 +155,37 @@ module spider cuDNN Please contact us at if you need a version not available on the platform. !!! prerequisite Note about Māui Ancillary Nodes -- Load the Anaconda3 module instead of Miniconda3 to manipulate -conda environments: -``` sl -module load Anaconda3/2020.02-GCC-7.1.0 -``` -- Use `module avail` to list available versions of modules, e.g. -``` sl -module avail cuDNN -``` + - Load the Anaconda3 module instead of Miniconda3 to manipulate + conda environments: + ``` sl + module load Anaconda3/2020.02-GCC-7.1.0 + ``` + - Use `module avail` to list available versions of modules, e.g. 
+ ``` sl + module avail cuDNN + ``` Additionally, depending on your version of TensorFlow, you may need to take into consideration the following: - install the `tensorflow-gpu` Python package if you are using -TensorFlow 1, + TensorFlow 1, - make sure to use a supported version of Python when creating the -conda environment (e.g. TensorFlow 1.14.0 requires Python 3.3 to -3.7), + conda environment (e.g. TensorFlow 1.14.0 requires Python 3.3 to + 3.7), - use `conda install` (not `pip install`) if your version of -TensorFlow relies on GCC 4.8 (TensorFlow < 1.15). + TensorFlow relies on GCC 4.8 (TensorFlow < 1.15). !!! prerequisite Conda tip -Make sure to use `module purge` before loading Miniconda3, to ensure -that no other Python module is loaded and could interfere with your -conda environment. -``` sl -module purge -module load Miniconda3/4.9.2 -export PYTHONNOUSERSITE=1 -source $(conda info --base)/etc/profile.d/conda.sh # if you didn't use "conda init" to set your .bashrc -conda ... # any conda commands (create, activate, install...) -``` + Make sure to use `module purge` before loading Miniconda3, to ensure + that no other Python module is loaded that could interfere with your + conda environment. + ``` sl + module purge + module load Miniconda3/4.9.2 + export PYTHONNOUSERSITE=1 + source $(conda info --base)/etc/profile.d/conda.sh # if you didn't use "conda init" to set your .bashrc + conda ... # any conda commands (create, activate, install...) + ``` ## Singularity containers @@ -208,13 +208,13 @@ support page. Here are the recommended options to run TensorFlow on the A100 GPUs: - If you use TensorFlow 1, use the TF1 [container provided by -NVIDIA](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow), -which comes with a version of TensorFlow 1.15 compiled specifically -to support the A100 GPUs (Ampere architecture). Other official -Python packages won't support the A100, triggering various crashes -and slowdowns.
+ NVIDIA](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow), + which comes with a version of TensorFlow 1.15 compiled specifically + to support the A100 GPUs (Ampere architecture). Other official + Python packages won't support the A100, triggering various crashes + and slowdowns. - If you use TensorFlow 2, any version from 2.4 and above will provide -support for the A100 GPUs. + support for the A100 GPUs. ## Example Slurm script @@ -227,58 +227,58 @@ make it classify pictures of flowers. This type of task is known as "transfer learning". 1. Create a virtual environment to install the -`tensorflow-hub[make_image_classifier]` package: + `tensorflow-hub[make_image_classifier]` package: -``` sl -module purge # start from a clean environment -module load TensorFlow/2.4.1-gimkl-2020a-Python-3.8.2 -export PYTHONNOUSERSITE=1 -python3 -m venv --system-site-packages tf_hub_venv -source tf_hub_venv/bin/activate -pip install tensorflow-hub[make_image_classifier]~=0.12 -``` + ``` sl + module purge # start from a clean environment + module load TensorFlow/2.4.1-gimkl-2020a-Python-3.8.2 + export PYTHONNOUSERSITE=1 + python3 -m venv --system-site-packages tf_hub_venv + source tf_hub_venv/bin/activate + pip install tensorflow-hub[make_image_classifier]~=0.12 + ``` 2. Download and uncompress the example dataset containing labelled -photos of flowers (daisies, dandelions, roses, sunflowers and -tulips): + photos of flowers (daisies, dandelions, roses, sunflowers and + tulips): -``` sl -wget http://download.tensorflow.org/example_images/flower_photos.tgz -O - | tar -xz -``` + ``` sl + wget http://download.tensorflow.org/example_images/flower_photos.tgz -O - | tar -xz + ``` 3. 
Copy the following code in a job submission script named -`flowers.sl`: - -``` sl -#!/bin/bash -e -#SBATCH --job-name=flowers-example -#SBATCH --gpus-per-node=1 -#SBATCH --cpus-per-task=2 -#SBATCH --time 00:10:00 -#SBATCH --mem 4G - -# load TensorFlow module and activate the virtual environment -module purge -module load TensorFlow/2.4.1-gimkl-2020a-Python-3.8.2 -export PYTHONNOUSERSITE=1 -source tf_hub_venv/bin/activate - -# select a model to train, here MobileNetV2 -MODEL_URL="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" - -# run the training script -make_image_classifier \ ---image_dir flower_photos \ ---tfhub_module "$MODEL_URL" \ ---image_size 224 \ ---saved_model_dir "model-${SLURM_JOBID}" -``` + `flowers.sl`: + + ``` sl + #!/bin/bash -e + #SBATCH --job-name=flowers-example + #SBATCH --gpus-per-node=1 + #SBATCH --cpus-per-task=2 + #SBATCH --time 00:10:00 + #SBATCH --mem 4G + + # load TensorFlow module and activate the virtual environment + module purge + module load TensorFlow/2.4.1-gimkl-2020a-Python-3.8.2 + export PYTHONNOUSERSITE=1 + source tf_hub_venv/bin/activate + + # select a model to train, here MobileNetV2 + MODEL_URL="https://tfhub.dev/google/tf2-preview/mobilenet_v2/feature_vector/4" + + # run the training script + make_image_classifier \ + --image_dir flower_photos \ + --tfhub_module "$MODEL_URL" \ + --image_size 224 \ + --saved_model_dir "model-${SLURM_JOBID}" + ``` 4. Submit the job: -``` sl -sbatch flowers.sl -``` + ``` sl + sbatch flowers.sl + ``` Once the job has finished, the trained model will be saved in a `model-JOBID` folder, where `JOBID` is the Slurm job ID number. @@ -287,9 +287,9 @@ All messages printed by TensorFlow during the training, including training and validation accuracies, are captured in the Slurm output file, named `slurm-JOBID.out` by default. !!!
prerequisite Tips -While your job is running, you can monitor the progress of model -training using `tail -f` on the Slurm output file: -``` sl -tail -f slurm-JOBID.out # replace JOBID with an actual number -``` -Press CTRL+C to get the bash prompt back. \ No newline at end of file + While your job is running, you can monitor the progress of model + training using `tail -f` on the Slurm output file: + ``` sl + tail -f slurm-JOBID.out # replace JOBID with an actual number + ``` + Press CTRL+C to get the bash prompt back. \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/Trinity.md b/docs/Scientific_Computing/Supported_Applications/Trinity.md index 658f02c52..81aaf6b43 100644 --- a/docs/Scientific_Computing/Supported_Applications/Trinity.md +++ b/docs/Scientific_Computing/Supported_Applications/Trinity.md @@ -78,11 +78,11 @@ The following Slurm script is a template for running Trinity Phase 1 **Note**  : - `--cpus-per-task` and `--mem` defined in the following example are -just place holders. + just placeholders.  - Use a subset of your sample, run a test first to find the -suitable/required amount of CPUs and memory for your dataset - + suitable/required amount of CPUs and memory for your dataset +  ``` sl #!/bin/bash -e @@ -99,22 +99,22 @@ module load Trinity/2.14.0-gimkl-2022a # run trinity, stop before phase 2 srun Trinity --no_distributed_trinity_exec \ ---CPU ${SLURM_CPUS_PER_TASK} --max_memory 200G \ -[your_other_trinity_options] + --CPU ${SLURM_CPUS_PER_TASK} --max_memory 200G \ + [your_other_trinity_options] ``` The extra Trinity arguments are: - `--no_distributed_trinity_exec` tells Trinity to stop before running -Phase 2 + Phase 2 - `--CPU ${SLURM_CPUS_PER_TASK}` tells Trinity to use the number of -CPUs specified by the sbatch option `--cpus-per-task` (i.e.
you only + need to update it in one place if you change it) - `--max_memory` should be the same (or maybe slightly lower, so you -have a small buffer) than the value specified with the sbatch option -`--mem` + have a small buffer) than the value specified with the sbatch option + `--mem` - `[your_other_trinity_options]` should be replaced with the other -trinity options you would usually use, e.g. `--seqType fq`, etc. + trinity options you would usually use, e.g. `--seqType fq`, etc. ### Running Trinity Phase 2 @@ -161,7 +161,7 @@ gridtype=SLURM # template for a grid submission # make sure: -# --partition is chosen appropriately for the resource requirements +# --partition is chosen appropriately for the resource requirements # (here we choose either large or bigmem, whichever is available first) # --ntasks and --cpus-per-task should always be 1 # --mem may need to be adjusted @@ -184,18 +184,18 @@ max_nodes=100 cmds_per_node=100 ``` -The important details are: + The important details are: - `cmds_per_node` is the size of each batch of commands, i.e. here -each Slurm sub-job runs 100 commands and then exits + each Slurm sub-job runs 100 commands and then exits - `max_nodes` is the number of sub-jobs that can be in the queue at -any given time (each sub-job is single threaded, i.e. it uses just -one core) + any given time (each sub-job is single threaded, i.e. 
it uses just + one core) - name this file SLURM.conf in the directory you will submit the job -from + from - memory usage may be low enough that the sub-jobs can be run on -either the large or bigmem partitions, which should improve -throughput compared to bigmem alone + either the large or bigmem partitions, which should improve + throughput compared to bigmem alone A template Slurm submission script for Trinity Phase 2 is shown below: @@ -217,16 +217,16 @@ module load HpcGridRunner/20210803 # run Trinity - this will be the master HPC GridRunner process that handles # submitting sub-jobs (batches of commands) to the Slurm queue srun Trinity --CPU ${SLURM_CPUS_PER_TASK} --max_memory 20G \ ---grid_exec "hpc_cmds_GridRunner.pl --grid_conf ${SLURM_SUBMIT_DIR}/SLURM.conf -c" \ -[your_other_trinity_options] + --grid_exec "hpc_cmds_GridRunner.pl --grid_conf ${SLURM_SUBMIT_DIR}/SLURM.conf -c" \ + [your_other_trinity_options] ``` - This assumes that you named the HPC GridRunner configuration script -SLURM.conf and placed it in the same directory that you submit this -job from + SLURM.conf and placed it in the same directory that you submit this + job from - The options `--CPU` and `--max_memory` aren't used by Trinity in -"grid mode" but are still required to be set (i.e. it shouldn't -matter what you set them to) + "grid mode" but are still required to be set (i.e. 
it shouldn't + matter what you set them to) ## Benchmarks diff --git a/docs/Scientific_Computing/Supported_Applications/TurboVNC.md b/docs/Scientific_Computing/Supported_Applications/TurboVNC.md index c180379c0..3923318f6 100644 --- a/docs/Scientific_Computing/Supported_Applications/TurboVNC.md +++ b/docs/Scientific_Computing/Supported_Applications/TurboVNC.md @@ -19,7 +19,7 @@ zendesk_section_id: 360000040076 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) - +  ## Setup @@ -28,7 +28,7 @@ zendesk_section_id: 360000040076 You will also need java runtime ``` sl -sudo apt install -y openjdk-11-jre +sudo apt install -y openjdk-11-jre  ``` #### 1. Download @@ -40,7 +40,7 @@ Download TurboVNC here. On Ubuntu, you can install the vnc-java package, e.g.: ``` sl -sudo apt install vnc-java +sudo apt install vnc-java  ``` Do not use gvncviewer, as it doesn't allow to connect to a VNC server @@ -81,11 +81,11 @@ module load TurboVNC vncserver ``` !!! prerequisite Warning -Do not use `-securitytypes none` as an argument to `vncserver`! If you -do so, anyone who has a cluster login and knows how to find a VNC -server in the list of processes can connect to your VNC server and -impersonate you. **You are responsible for anything done on the -cluster under your user account.** + Do not use `-securitytypes none` as an argument to `vncserver`! If you + do so, anyone who has a cluster login and knows how to find a VNC + server in the list of processes can connect to your VNC server and + impersonate you. **You are responsible for anything done on the + cluster under your user account.** You will receive a message @@ -104,113 +104,113 @@ will be 5902; and so on. 1. Open the TurboVNC viewer: -``` sl -vncviewer -``` + ``` sl + vncviewer + ``` 2. Within the TurboVNC viewer, connect to the host and display number, -e.g. to `mahuika01.mahuika.nesi.org.nz:1`. 
Alternatively, use the -host and port number: `mahuika01.mahuika.nesi.org.nz::5901` (note -the two colons between hostname and port number). + e.g. to `mahuika01.mahuika.nesi.org.nz:1`. Alternatively, use the + host and port number: `mahuika01.mahuika.nesi.org.nz::5901` (note + the two colons between hostname and port number). #### Outside the NIWA network, and not on the NIWA VPN 1. Open an SSH tunnel through the lander node to the SSH port (22) on -the desired login node: - -``` sl -# This command sets up local SSH port forwarding. -# The command is of the form: -# ssh -L :: -# -# We can't use aliases (mahuika, maui) or load-balancing hostnames -# (login.mahuika.nesi.org.nz, login.maui.nesi.org.nz) because those run the risk -# of connecting to the wrong host, e.g. connecting to mahuika02 when the TurboVNC -# server is running on mahuika01. -# -# Also, the hostname of destination_host is as seen from gateway_host, -# not from your local workstation. -# -# The choice of local port is yours, but you may find the following convention -# useful: -# -# TurboVNC server on mahuika01 => local_port = destination_port + 10,000 -# TurboVNC server on mahuika02 => local_port = destination_port + 20,000 -# TurboVNC server on maui01 => local_port = destination_port + 30,000 -# TurboVNC server on maui02 => local_port = destination_port + 40,000 -# -# Following this convention, for a connection via the SSH server on mahuika01, -# such that the destination port is 22: -# -# local_port = 22 + 10000 = 10022 -# -ssh -L 10022:mahuika01.mahuika.nesi.org.nz:22 -N lander -``` + the desired login node: + + ``` sl + # This command sets up local SSH port forwarding. + # The command is of the form: + # ssh -L :: + # + # We can't use aliases (mahuika, maui) or load-balancing hostnames + # (login.mahuika.nesi.org.nz, login.maui.nesi.org.nz) because those run the risk + # of connecting to the wrong host, e.g. connecting to mahuika02 when the TurboVNC + # server is running on mahuika01. 
+ # + # Also, the hostname of destination_host is as seen from gateway_host, + # not from your local workstation. + # + # The choice of local port is yours, but you may find the following convention + # useful: + # + # TurboVNC server on mahuika01 => local_port = destination_port + 10,000 + # TurboVNC server on mahuika02 => local_port = destination_port + 20,000 + # TurboVNC server on maui01 => local_port = destination_port + 30,000 + # TurboVNC server on maui02 => local_port = destination_port + 40,000 + # + # Following this convention, for a connection via the SSH server on mahuika01, + # such that the destination port is 22: + # + # local_port = 22 + 10000 = 10022 + # + ssh -L 10022:mahuika01.mahuika.nesi.org.nz:22 -N lander + ``` 2. In a new terminal open an SSH tunnel from the already open tunnel to -the desired TurboVNC port: - -``` sl -# This command sets up local SSH port forwarding. -# The command is of the form: -# ssh -L :: -# -# The hostname of destination_host is as seen from gateway_host, -# not from your local workstation. But with the above tunnel set up, -# anything done on (for example) port 10022 on localhost is seen as if -# it were done directly on mahuika01. -# -# The choice of local port is yours, but you may find the following convention -# useful: -# -# TurboVNC server on mahuika01 => local_port = destination_port + 10,000 -# TurboVNC server on mahuika02 => local_port = destination_port + 20,000 -# TurboVNC server on maui01 => local_port = destination_port + 30,000 -# TurboVNC server on maui02 => local_port = destination_port + 40,000 -# -# Following this convention, for a connection to a TurboVNC running on -# display 1 on mahuika01, such that the destination port is 5901: -# -# local_port = 5901 + 10000 = 15901 -# -# The rationale for not using local ports 5901, 5902 etc., is that we -# want you to be able to run a VNC server on your own machine if you -# wish. 
Using the same (local) port as a TurboVNC server would want -# to use will potentially cause problems. -# -# Because the traffic is sent to port 10022 on localhost, which is -# forwarded to port 22 on mahuika01, the first "localhost" (between -# 15901 and 5901) is localhost as seen from mahuika01, i.e. it is -# mahuika01. The second localhost is your local workstation. But -# you have to use your NeSI Linux username, not your local Linux -# username, to authenticate. Clear as mud? -ssh -L 15901:localhost:5901 -N -p 10022 -l my_nesi_linux_username localhost -``` - -As an alternative to steps 1 and 2, if using MobaXTerm in Windows, -set up and then start port forwarding connections to look like -this: -![2020-02-10\_TurboVNC\_MobaXTerm\_ssh\_tunnel\_setup.png](../../assets/images/TurboVNC.png) - -- The tunnel through the lander node must be started before the -tunnel through localhost can be started. -- The destination server for the tunnel through the lander node -must be the NeSI login node where your TurboVNC server is -running. -- The destination port for the second tunnel must be the port -corresponding to your display number: `5901` for display -1, `5902` for display 2, and so forth. + the desired TurboVNC port: + + ``` sl + # This command sets up local SSH port forwarding. + # The command is of the form: + # ssh -L :: + # + # The hostname of destination_host is as seen from gateway_host, + # not from your local workstation. But with the above tunnel set up, + # anything done on (for example) port 10022 on localhost is seen as if + # it were done directly on mahuika01. 
+ # + # The choice of local port is yours, but you may find the following convention + # useful: + # + # TurboVNC server on mahuika01 => local_port = destination_port + 10,000 + # TurboVNC server on mahuika02 => local_port = destination_port + 20,000 + # TurboVNC server on maui01 => local_port = destination_port + 30,000 + # TurboVNC server on maui02 => local_port = destination_port + 40,000 + # + # Following this convention, for a connection to a TurboVNC running on + # display 1 on mahuika01, such that the destination port is 5901: + # + # local_port = 5901 + 10000 = 15901 + # + # The rationale for not using local ports 5901, 5902 etc., is that we + # want you to be able to run a VNC server on your own machine if you + # wish. Using the same (local) port as a TurboVNC server would want + # to use will potentially cause problems. + # + # Because the traffic is sent to port 10022 on localhost, which is + # forwarded to port 22 on mahuika01, the first "localhost" (between + # 15901 and 5901) is localhost as seen from mahuika01, i.e. it is + # mahuika01. The second localhost is your local workstation. But + # you have to use your NeSI Linux username, not your local Linux + # username, to authenticate. Clear as mud? + ssh -L 15901:localhost:5901 -N -p 10022 -l my_nesi_linux_username localhost + ``` + + As an alternative to steps 1 and 2, if using MobaXTerm in Windows, + set up and then start port forwarding connections to look like + this: + ![2020-02-10\_TurboVNC\_MobaXTerm\_ssh\_tunnel\_setup.png](../../assets/images/TurboVNC.png) + + - The tunnel through the lander node must be started before the + tunnel through localhost can be started. + - The destination server for the tunnel through the lander node + must be the NeSI login node where your TurboVNC server is + running. + - The destination port for the second tunnel must be the port + corresponding to your display number: `5901` for display + 1, `5902` for display 2, and so forth. 3. 
Open the VNC viewer: -- From the Ubuntu command line: -`vncviewer localhost::` (e.g. -`vncviewer localhost::15901`) -- On Windows: Select TurboVNC Viewer from the Start menu (or use -an equivalent option), and enter `localhost::` (e.g. -`vncviewer localhost::15901`) at the dialog + - From the Ubuntu command line: + `vncviewer localhost::` (e.g. + `vncviewer localhost::15901`) + - On Windows: Select TurboVNC Viewer from the Start menu (or use + an equivalent option), and enter `localhost::` (e.g. + `vncviewer localhost::15901`) at the dialog 4. If prompted for a password, click the button to enter an empty -password + password ### Putting your TurboVNC client in fullscreen mode @@ -229,18 +229,18 @@ before you close the first tunnel. ### Stopping the server 1. Go to your tmux session on the server, or (alternatively) go to or -open some other session on that server. If you use a different -session, you will have to load the TurboVNC module if it's not -already loaded. + open some other session on that server. If you use a different + session, you will have to load the TurboVNC module if it's not + already loaded. 2. Remind yourself of your TurboVNC display number. 3. Run the following command: -``` sl -# Example: vncserver -kill :1 -vncserver -kill : -``` + ``` sl + # Example: vncserver -kill :1 + vncserver -kill : + ``` ### Finding open TurboVNC servers diff --git a/docs/Scientific_Computing/Supported_Applications/VASP.md b/docs/Scientific_Computing/Supported_Applications/VASP.md index 7590d38da..1c883be2e 100644 --- a/docs/Scientific_Computing/Supported_Applications/VASP.md +++ b/docs/Scientific_Computing/Supported_Applications/VASP.md @@ -110,7 +110,7 @@ or, equivalently, as shown in our example script above. - +  ### How many cores should I request? @@ -305,7 +305,7 @@ corresponds to only exchange and not to exchange and correlation." 
For more information on correct usage of LIBXC please see[VASP's documentation](https://www.vasp.at/wiki/index.php/LIBXC1) on this. - +  ### Which VASP executable should I use? @@ -356,54 +356,54 @@ details about the available GPUs on NeSI Here are some additional notes specific to running VASP on GPUs on NeSI: - - The command that you use to run VASP does not change - unlike -the previous CUDA version, which had a `vasp_gpu` executable, -with the OpenACC version the usual VASP executables (`vasp_std`, -`vasp_gam`, `vasp_ncl`) are all built with OpenACC GPU support -in the *\*-NVHPC-\** modules, so just use those as usual -- Always select one MPI process (Slurm task) per GPU, for example: -- Running on 1 P100 GPU - -``` sl -# snippet of Slurm script -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=1 # 1 task per node as we set 1 GPU per node below -#SBATCH --cpus-per-task=1 -#SBATCH --gpus-per-node=P100:1 -# end snippet -``` - -- Running on 4 HGX A100 GPUs on a single node - -``` sl -# snippet of Slurm script -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=4 # 4 tasks per node as we set 4 GPUs per node below -#SBATCH --cpus-per-task=1 -#SBATCH --gpus-per-node=A100:4 -#SBATCH --partition=hgx # required to get the HGX A100s instead of PCI A100s -# end snippet -``` -- Multiple threads per MPI process (--cpus-per-task) might be -beneficial for performance but you should start by setting this -to 1 to get a baseline -- VASP will scale better across multiple GPUs when they are all on -the same node compared to across multiple nodes -- if you see memory errors like -`call to cuMemAlloc returned error 2: Out of memory` you -probably ran out of GPU memory. 
You could try requesting more -GPUs (so the total amount of available memory is higher) and/or -moving to GPUs with more memory (note: GPU memory is distinct -from the usual memory you have to request for your job via -`#SBATCH --mem` or similar; when you are allocated a GPU you get -access to all the GPU memory on that device) -- P100 GPUs have 12 GB GPU memory and you can have a maximum -of 2 per node -- PCI A100 GPUs have 40 GB GPU memory and you can have a -maximum of 2 per node -- HGX A100 GPUs have 80 GB GPU memory and you can have a -maximum of 4 per node -- the HGX GPUs have a faster interconnect between the GPUs within -a single node; if using multiple GPUs you may get better -performance with the HGX A100s than with the PCI A100s -- A100 GPUs have more compute power than P100s so will perform -better if your simulation can take advantage of the extra power \ No newline at end of file + the previous CUDA version, which had a `vasp_gpu` executable, + with the OpenACC version the usual VASP executables (`vasp_std`, + `vasp_gam`, `vasp_ncl`) are all built with OpenACC GPU support + in the *\*-NVHPC-\** modules, so just use those as usual + - Always select one MPI process (Slurm task) per GPU, for example: + - Running on 1 P100 GPU + + ``` sl + # snippet of Slurm script + #SBATCH --nodes=1 + #SBATCH --ntasks-per-node=1 # 1 task per node as we set 1 GPU per node below + #SBATCH --cpus-per-task=1 + #SBATCH --gpus-per-node=P100:1 + # end snippet + ``` + + - Running on 4 HGX A100 GPUs on a single node + + ``` sl + # snippet of Slurm script + #SBATCH --nodes=1 + #SBATCH --ntasks-per-node=4 # 4 tasks per node as we set 4 GPUs per node below + #SBATCH --cpus-per-task=1 + #SBATCH --gpus-per-node=A100:4 + #SBATCH --partition=hgx # required to get the HGX A100s instead of PCI A100s + # end snippet + ``` + - Multiple threads per MPI process (--cpus-per-task) might be + beneficial for performance but you should start by setting this + to 1 to get a baseline + - VASP will 
scale better across multiple GPUs when they are all on + the same node compared to across multiple nodes + - if you see memory errors like + `call to cuMemAlloc returned error 2: Out of memory` you + probably ran out of GPU memory. You could try requesting more + GPUs (so the total amount of available memory is higher) and/or + moving to GPUs with more memory (note: GPU memory is distinct + from the usual memory you have to request for your job via + `#SBATCH --mem` or similar; when you are allocated a GPU you get + access to all the GPU memory on that device) + - P100 GPUs have 12 GB GPU memory and you can have a maximum + of 2 per node + - PCI A100 GPUs have 40 GB GPU memory and you can have a + maximum of 2 per node + - HGX A100 GPUs have 80 GB GPU memory and you can have a + maximum of 4 per node + - the HGX GPUs have a faster interconnect between the GPUs within + a single node; if using multiple GPUs you may get better + performance with the HGX A100s than with the PCI A100s + - A100 GPUs have more compute power than P100s so will perform + better if your simulation can take advantage of the extra power \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/VirSorter.md b/docs/Scientific_Computing/Supported_Applications/VirSorter.md index 3d2663df1..4558bafed 100644 --- a/docs/Scientific_Computing/Supported_Applications/VirSorter.md +++ b/docs/Scientific_Computing/Supported_Applications/VirSorter.md @@ -42,10 +42,11 @@ provided to Slurm jobs would be: ``` sl module load VirSorter/2.1-gimkl-2020a-Python-3.8.2 virsorter run \ ---seqfile test.fasta \ ---jobs ${SLURM_CPUS_PER_TASK:-2} \ ---rm-tmpdir \ -all \ ---config LOCAL_SCRATCH=${TMPDIR:-/tmp} + --seqfile test.fasta \ + --jobs ${SLURM_CPUS_PER_TASK:-2} \ + --rm-tmpdir \ + all \ + --config LOCAL_SCRATCH=${TMPDIR:-/tmp} ``` +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/WRF.md 
b/docs/Scientific_Computing/Supported_Applications/WRF.md index 8619ba1ba..aa2c2cf4d 100644 --- a/docs/Scientific_Computing/Supported_Applications/WRF.md +++ b/docs/Scientific_Computing/Supported_Applications/WRF.md @@ -29,7 +29,7 @@ architecture supporting parallel computation and system extensibility. The model serves a wide range of meteorological applications across scales from tens of meters to thousands of kilometres. - +  Download WRF: @@ -48,15 +48,15 @@ installed. On Māui, these are available as modules. On Mahuika, we recommend to download these packages and build them by hand (instructions are provided below). - +  ## WRF on Mahuika - +  ### Environment on Mahuika -We'll use the Intel compiler and Intel MPI library. +We'll use the Intel compiler and Intel MPI library.  ``` sl module purge @@ -67,7 +67,7 @@ Although NeSI has NetCDF modules installed, WRF wants the C and Fortran NetCDF libraries, include files and modules all installed under the same root directory. Hence we build those by hand. - +  ### Building WRF dependencies on Mahuika @@ -126,7 +126,7 @@ cd .. cd .. ``` - +  Then proceed to configure WRF by setting @@ -149,13 +149,13 @@ and build the code with ``` This may take several hours to compile. Check the log file to ensure -that the compilation was successful. - +that the compilation was successful.  +  ### Running WRF on Mahuika - +  An example Slurm script for running WRF on Mahuika extension, which can be submitted with *sbatch name\_of\_script.sl*: @@ -182,7 +182,7 @@ srun --output=wrf.log ./wrf.exe - +  ## WRF on Māui @@ -332,7 +332,7 @@ Māui compute nodes. However, *ungrib* is serial and should not be run on a compute node unless it is very quick to finish. Alternatively you could run *ungrib* on an interactive/login node if it will not take up many resources, or you could compile WRF and WPS on a Māui Ancillary -node and run it there. +node and run it there.  
Note that WPS does a lot of file IO and therefore probably won't scale up to as many processes as WRF. diff --git a/docs/Scientific_Computing/Supported_Applications/ipyrad.md b/docs/Scientific_Computing/Supported_Applications/ipyrad.md index 870a0c615..4ace7a51f 100644 --- a/docs/Scientific_Computing/Supported_Applications/ipyrad.md +++ b/docs/Scientific_Computing/Supported_Applications/ipyrad.md @@ -40,8 +40,8 @@ GPLv3 ### Getting Started The following **example** uses rad\_example which can be downloaded as per -instructions on - +instructions on  +  ``` sl $ curl -LkO https://eaton-lab.org/data/ipsimdata.tar.gz @@ -62,7 +62,7 @@ New file 'params-data1.txt' created in ........ `params-data1.txt` will be created in the current working directory. Review and edit the paths in the parameter file to match the destinations of input -data, barcode paths,etc. +data, barcode paths, etc.  ### Slurm Script for Using Multiple CPUs on a Single Compute Node @@ -93,6 +93,6 @@ cd $jobdir ## call ipyrad on your params file and perform 7 steps from the workflow -srun ipyrad -p $params -s 12 --force +srun ipyrad -p $params -s 12 --force ``` \ No newline at end of file diff --git a/docs/Scientific_Computing/Supported_Applications/ont-guppy-gpu.md b/docs/Scientific_Computing/Supported_Applications/ont-guppy-gpu.md index 7ea3e14cc..1491275cc 100644 --- a/docs/Scientific_Computing/Supported_Applications/ont-guppy-gpu.md +++ b/docs/Scientific_Computing/Supported_Applications/ont-guppy-gpu.md @@ -35,7 +35,7 @@ probabilities. including all functional specifications associated therewith made available to the Oxford Group's customers on the Oxford Group's websites, as amended from time to time (the "Base Caller -Documentation"), designed to convert certain Instrument +Documentation"), designed to convert certain Instrument Data to Biological Data, as may be made available to Customers by Oxford, whether free of charge or for a fee.
@@ -45,18 +45,18 @@ https://community.nanoporetech.com/ ### Example Slurm script - Following Slurm script is a template to run Basecalling on NVIDIA -P100 GPUs.( We do not recommend running Guppy jobs on CPUs ) + P100 GPUs.( We do not recommend running Guppy jobs on CPUs ) - `--device auto` will automatically pick up the GPU over CPU - Also,  NeSI Mahuika cluster can provide A100 GPUs  which can be 5-6 -times faster than P100 GPUs for Guppy Basecalling with  version. 5 -and above. This can be requested with -`#SBATCH --gpus-per-node A100:1` variable + times faster than P100 GPUs for Guppy Basecalling with  version. 5 + and above. This can be requested with + `#SBATCH --gpus-per-node A100:1` variable - Config files are stored in -***/opt/nesi/CS400\_centos7\_bdw/ont-guppy-gpu/(version)/data/ *** -with read permissions to all researchers (replace ***(version)*** -with the version of the module) - + ***/opt/nesi/CS400\_centos7\_bdw/ont-guppy-gpu/(version)/data/ *** + with read permissions to all researchers (replace ***(version)*** + with the version of the module) +  ``` sl #!/bin/bash -e diff --git a/docs/Scientific_Computing/Supported_Applications/snpEff.md b/docs/Scientific_Computing/Supported_Applications/snpEff.md index b83b35aeb..068dfa5a4 100644 --- a/docs/Scientific_Computing/Supported_Applications/snpEff.md +++ b/docs/Scientific_Computing/Supported_Applications/snpEff.md @@ -28,7 +28,7 @@ zendesk_section_id: 360000040076 snpEff is a genetic variant annotation, and functional effect prediction tool. - +  ## Configuration File @@ -39,27 +39,27 @@ required for snpEff. 1. Load the latest version of the `snpEff` module. 2. Make a copy of the snpEff config file, replacing -<project\_id>, with your project ID. + <project\_id>, with your project ID. -``` sl -cp $EBROOTSNPEFF/snpEff.config /nesi/project//my_snpEff.config -``` + ``` sl + cp $EBROOTSNPEFF/snpEff.config /nesi/project//my_snpEff.config + ``` 3. 
Open the`my_snpEff.config` file, and edit **line 17** from the top -to point to a preferred path within your project directory or home -directory, e.g., edit line 17 `data.dir = ./data/` to something -like:`data.dir =/nesi/project/` -Please note that you must have read and write permissions to this -directory. + to point to a preferred path within your project directory or home + directory, e.g., edit line 17 `data.dir = ./data/` to something + like:`data.dir =/nesi/project/` + Please note that you must have read and write permissions to this + directory. 4. Run `snpEff.jar` using the `-c` flag to point to your new config -file, e.g., `-c path/to/snpEff/my_snpEff.config` For example: - -``` sl -java -jar $EBROOTSNPEFF/snpEff.jar -c /nesi/project//my_snpEff.config -``` + file, e.g., `-c path/to/snpEff/my_snpEff.config` For example: + ``` sl + java -jar $EBROOTSNPEFF/snpEff.jar -c /nesi/project//my_snpEff.config + ``` +  ## Example Script @@ -87,3 +87,4 @@ java -jar $EBROOTSNPEFF/snpEff.jar -h java -jar $EBROOTSNPEFF/snpEff.jar -c /nesi/project//my_snpEff.config ``` +  \ No newline at end of file diff --git a/docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md b/docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md index 05b49e5da..b8ea88ce9 100644 --- a/docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md +++ b/docs/Scientific_Computing/Terminal_Setup/Git_Bash_Windows.md @@ -20,10 +20,10 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! 
prerequisite Requirements -- Have a [NeSI -account.](https://support.nesi.org.nz/hc/en-gb/articles/360000159715-Creating-a-NeSI-Account) -- Be a member of an [active -project.](https://support.nesi.org.nz/hc/en-gb/articles/360000693896-Applying-to-join-a-NeSI-project) + - Have a [NeSI + account.](https://support.nesi.org.nz/hc/en-gb/articles/360000159715-Creating-a-NeSI-Account) + - Be a member of an [active + project.](https://support.nesi.org.nz/hc/en-gb/articles/360000693896-Applying-to-join-a-NeSI-project) ## First time setup @@ -33,45 +33,45 @@ Git Bash can be downloaded as part of Git The login process can be simplified with a few configurations. 1. Open Git Bash and run `nano ~/.ssh/config` to open your ssh config -file and add the following (replacing `` with your -username): - -``` sl -Host mahuika -User -Hostname login.mahuika.nesi.org.nz -ProxyCommand ssh -W %h:%p lander -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 - -Host maui -User -Hostname login.maui.nesi.org.nz -ProxyCommand ssh -W %h:%p lander -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 - -Host lander -User -HostName lander.nesi.org.nz -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 - -Host * -ControlMaster auto -ControlPersist 1 -``` - -Close and save with ctrl x, y, Enter + file and add the following (replacing `` with your + username): + + ``` sl + Host mahuika + User + Hostname login.mahuika.nesi.org.nz + ProxyCommand ssh -W %h:%p lander + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + + Host maui + User + Hostname login.maui.nesi.org.nz + ProxyCommand ssh -W %h:%p lander + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + + Host lander + User + HostName lander.nesi.org.nz + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + + Host * + ControlMaster 
auto + ControlPersist 1 + ``` + + Close and save with ctrl x, y, Enter 2. Ensure the permissions are correct by -running `chmod 600 ~/.ssh/config`. + running `chmod 600 ~/.ssh/config`. ## Usage diff --git a/docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md b/docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md index c04dcda3d..81ac44c23 100644 --- a/docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md +++ b/docs/Scientific_Computing/Terminal_Setup/MobaXterm_Setup_Windows.md @@ -20,57 +20,57 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have an [active account and -project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) -- Set up your [Linux -Password.](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) -- Set up Second [Factor -Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) -- Windows operating system. + - Have an [active account and + project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) + - Set up your [Linux + Password.](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) + - Set up Second [Factor + Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) + - Windows operating system. Setting up MobaXterm as shown below will allow you to connect to the Cluster with less keyboard inputs as well as allow use of the file transfer GUI. 1. Download MobaXTerm -[here](https://mobaxterm.mobatek.net/download-home-edition.html) -- Use the Portable Edition if you don't have administrator rights -on your machine. This is the recommended way for NIWA -researchers. -- Otherwise, choose freely the Portable or Installer Edition. + [here](https://mobaxterm.mobatek.net/download-home-edition.html) + - Use the Portable Edition if you don't have administrator rights + on your machine. This is the recommended way for NIWA + researchers. 
+ - Otherwise, choose freely the Portable or Installer Edition. 2. To set up a session, Click 'Session' in the top left corner: 3. In "SSH", -- Set the remote host to `login.mahuika.nesi.org.nz` for Mahuika -users or `login.maui.nesi.org.nz` for Māui users. -- Enable the "Specify username" option and put your Username in -the corresponding box. + - Set the remote host to `login.mahuika.nesi.org.nz` for Mahuika + users or `login.maui.nesi.org.nz` for Māui users. + - Enable the "Specify username" option and put your Username in + the corresponding box. 4. In the "Advanced SSH settings" -- Set SSH-browser type to '**SCP (enhanced speed)**'. -- Optionally, tick the 'Follow SSH path' button. + - Set SSH-browser type to '**SCP (enhanced speed)**'. + - Optionally, tick the 'Follow SSH path' button. 1. In the “Network settings” tab: -- Select "SSH gateway (jump host)" to open a popup window -- In this window enter `lander.nesi.org.nz` in the “Gateway host” -field, as well as your NeSI username in the Username field for -the gateway SSH server then select OK to close the window. + - Select "SSH gateway (jump host)" to open a popup window + - In this window enter `lander.nesi.org.nz` in the “Gateway host” + field, as well as your NeSI username in the Username field for + the gateway SSH server then select OK to close the window. ![mceclip4.png](../../assets/images/MobaXterm_Setup_Windows.png) ![mceclip5.png](../../assets/images/MobaXterm_Setup_Windows_0.png) 1. Click 'OK' on the open window, usually this will start a new session -immediately. *See usage below.* + immediately. *See usage below.* !!! prerequisite WARNING -There is a bug which causes some users to be repeatedly prompted -`@lander.nesi.org.nz's password:` -This can be resolved by clicking "OK" each time you are prompted then -logging in as normal once you are prompted for your `First Factor:` or -`Password:`. 
-See [Login -Troubleshooting](https://support.nesi.org.nz/hc/en-gb/articles/360000570215) -for more details + There is a bug which causes some users to be repeatedly prompted + `@lander.nesi.org.nz's password:` + This can be resolved by clicking "OK" each time you are prompted then + logging in as normal once you are prompted for your `First Factor:` or + `Password:`. + See [Login + Troubleshooting](https://support.nesi.org.nz/hc/en-gb/articles/360000570215) + for more details ## Usage @@ -111,8 +111,8 @@ Māui users must enter their password combined with their second factor. For example, if your password is "Password" and your current second factor is "123456" then you must enter "Password123456". !!! prerequisite Tip -If you choose to save your password, the process will be the same -minus the prompts for First Factor. + If you choose to save your password, the process will be the same + minus the prompts for First Factor. ## Credential Manager @@ -130,18 +130,18 @@ management system for saved session. Two steps to try: - Remove any previously saved sessions either related to `lander` OR -`mahuika` from sessions panel on the left + `mahuika` from sessions panel on the left - Access MobaXterm password management system as below and remove -saved credentials -- Go to **Settings**->**Configuration** and go to the -**General** tab and click on **MobaXterm password management** -- You will see the saved sessions for `lander` (and perhaps -`mahuika` as well). I recommend removing all of it and restart -MobaXterm before the next login attempt + saved credentials + - Go to **Settings**->**Configuration** and go to the + **General** tab and click on **MobaXterm password management** + - You will see the saved sessions for `lander` (and perhaps + `mahuika` as well). 
I recommend removing all of it and restart + MobaXterm before the next login attempt Then setup a new session [according to the support doc instructions](https://support.nesi.org.nz/hc/en-gb/articles/360000624696-MobaXterm-Setup-Windows-) as before. !!! prerequisite What Next? -- [Moving files to/from a -cluster.](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) \ No newline at end of file + - [Moving files to/from a + cluster.](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) \ No newline at end of file diff --git a/docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md b/docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md index 6c1524fb8..7c939c4f5 100644 --- a/docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md +++ b/docs/Scientific_Computing/Terminal_Setup/Standard_Terminal_Setup.md @@ -22,16 +22,16 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have an [active account and -project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) -- Set up your [Linux -Password.](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) -- Set up Second [Factor -Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) -- Using standard Linux/Mac terminal *or* [Windows Subsystem for -Linux](https://support.nesi.org.nz/hc/en-gb/articles/360001075575) -with [Ubuntu -terminal](https://support.nesi.org.nz/hc/en-gb/articles/360001050575). 
+ - Have an [active account and + project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) + - Set up your [Linux + Password.](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) + - Set up Second [Factor + Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) + - Using standard Linux/Mac terminal *or* [Windows Subsystem for + Linux](https://support.nesi.org.nz/hc/en-gb/articles/360001075575) + with [Ubuntu + terminal](https://support.nesi.org.nz/hc/en-gb/articles/360001050575). ## First time setup @@ -39,49 +39,49 @@ The login process can be simplified significantly with a few easy configurations. 1. In a new local terminal run; `mkdir -p ~/.ssh/sockets` this will -create a hidden file in your home directory to store socket -configurations. + create a hidden file in your home directory to store socket + configurations. 2. Open your ssh config file with  `nano ~/.ssh/config` and add the -following (replacing **`username`** with your username): - -``` sl -Host mahuika -User username -Hostname login.mahuika.nesi.org.nz -ProxyCommand ssh -W %h:%p lander -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 - -Host maui -User username -Hostname login.maui.nesi.org.nz -ProxyCommand ssh -W %h:%p lander -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 - -Host lander -User username -HostName lander.nesi.org.nz -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 - -Host * -ControlMaster auto -ControlPath ~/.ssh/sockets/ssh_mux_%h_%p_%r -ControlPersist 1 -``` - -Close and save with ctrl x, y, Enter + following (replacing **`username`** with your username): + + ``` sl + Host mahuika + User username + Hostname login.mahuika.nesi.org.nz + ProxyCommand ssh -W %h:%p lander + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + + Host maui + User username + Hostname 
login.maui.nesi.org.nz + ProxyCommand ssh -W %h:%p lander + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + + Host lander + User username + HostName lander.nesi.org.nz + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 + + Host * + ControlMaster auto + ControlPath ~/.ssh/sockets/ssh_mux_%h_%p_%r + ControlPersist 1 + ``` + + Close and save with ctrl x, y, Enter 3. Ensure the permissions are correct by -running `chmod 600 ~/.ssh/config`. + running `chmod 600 ~/.ssh/config`. ## Usage @@ -108,7 +108,7 @@ scp mahuika:~/ (For more info visit [data transfer](https://support.nesi.org.nz/hc/en-gb/articles/360000578455-File-Transfer-with-SCP)). !!! prerequisite What Next? -- [Moving files to/from a -cluster.](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) -- Setting up a -[X-Server](https://support.nesi.org.nz/hc/en-gb/articles/360001075975) (optional). \ No newline at end of file + - [Moving files to/from a + cluster.](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) + - Setting up a + [X-Server](https://support.nesi.org.nz/hc/en-gb/articles/360001075975) (optional). \ No newline at end of file diff --git a/docs/Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows_10.md b/docs/Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows_10.md index ffd4c7d96..c587329ce 100644 --- a/docs/Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows_10.md +++ b/docs/Scientific_Computing/Terminal_Setup/Ubuntu_LTS_terminal_Windows_10.md @@ -20,56 +20,56 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! 
prerequisite Requirements -- Be a [member of an active -project.](https://support.nesi.org.nz/hc/en-gb/articles/360000693896-Applying-to-join-a-NeSI-project) -- Windows 10 with [WSL -enabled.](https://support.nesi.org.nz/hc/en-gb/articles/360001075575) + - Be a [member of an active + project.](https://support.nesi.org.nz/hc/en-gb/articles/360000693896-Applying-to-join-a-NeSI-project) + - Windows 10 with [WSL + enabled.](https://support.nesi.org.nz/hc/en-gb/articles/360001075575) Currently the native Windows command prompt (even with WSL enabled) does not support certain features, until this is fixed we recommend using the Ubuntu LTS Terminal. 1. Open the Microsoft store, search for 'Ubuntu', find and install -'Ubuntu 18.04 LTS' or  'Ubuntu 20.04 LTS' - -![ubuntu5.png](../../assets/images/Ubuntu_LTS_terminal_Windows_10.png)![ubuntu6.png](../../assets/images/Ubuntu_LTS_terminal_Windows_11.png) - - + 'Ubuntu 18.04 LTS' or  'Ubuntu 20.04 LTS'  + + ![ubuntu5.png](../../assets/images/Ubuntu_LTS_terminal_Windows_10.png)![ubuntu6.png](../../assets/images/Ubuntu_LTS_terminal_Windows_11.png) + + 2. Close the “Add your Microsoft account.. dialogue box as you do not -need an account for the installation.You may have to click “Install” -for a second time (If the above dialogue box reappears, close as -before and download/install will begin. - -![ubuntu3.png](../../assets/images/Ubuntu_LTS_terminal_Windows_12.png) - -![ubuntu4.png](../../assets/images/Ubuntu_LTS_terminal_Windows_13.png) - + need an account for the installation.You may have to click “Install” + for a second time (If the above dialogue box reappears, close as + before and download/install will begin. + + ![ubuntu3.png](../../assets/images/Ubuntu_LTS_terminal_Windows_12.png) +   +  ![ubuntu4.png](../../assets/images/Ubuntu_LTS_terminal_Windows_13.png) + 3. Launch “Ubuntu 18.04 LTS” from start menu and wait for the first -time installation to complete. + time installation to complete. 4. 
As you are running Ubuntu on Windows for the first time, it will -require to be configured. Once the installation was complete, you -will be prompted to “Enter new UNIX username” and press -<Enter>. This username can be anything you want. - -![ubuntu1.png](../../assets/images/Ubuntu_LTS_terminal_Windows_14.png) - + require to be configured. Once the installation was complete, you + will be prompted to “Enter new UNIX username” and press + <Enter>. This username can be anything you want. + + ![ubuntu1.png](../../assets/images/Ubuntu_LTS_terminal_Windows_14.png) + 5. Now, type in a new password for the username you picked and press -<Enter>. (Again this password is anything you want). Then -retype the password to confirm and press <Enter> - -![ubuntu2.png](../../assets/images/Ubuntu_LTS_terminal_Windows_15.png) + <Enter>. (Again this password is anything you want). Then + retype the password to confirm and press <Enter> + + ![ubuntu2.png](../../assets/images/Ubuntu_LTS_terminal_Windows_15.png) 6. To create a symbolic link to your Windows filesystems in your home -directory run the following command replacing c with the name of -your Windows filesystems found in /mnt/. + directory run the following command replacing c with the name of + your Windows filesystems found in /mnt/.  -``` sl -ln -s /mnt/c/Users/YourWindowsUsername/ WinFS -``` + ``` sl + ln -s /mnt/c/Users/YourWindowsUsername/ WinFS + ``` !!! prerequisite What Next? -- Set up your [SSH config -file](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). \ No newline at end of file + - Set up your [SSH config + file](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). 
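To confirm the symbolic link from step 6 resolves to the intended Windows directory (`WinFS` and the username path are just the example names used above), a quick check:

``` sl
# -ld shows the link entry itself; readlink -f prints the resolved target
ls -ld WinFS
readlink -f WinFS
```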
\ No newline at end of file diff --git a/docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md b/docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md index de4889729..077a13722 100644 --- a/docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md +++ b/docs/Scientific_Computing/Terminal_Setup/WinSCP-PuTTY_Setup_Windows.md @@ -20,15 +20,15 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have an [active account and -project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) -- Set up your [NeSI account -password.](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) -- Set up Second [Factor -Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) -- Be using the Windows operating system. - + - Have an [active account and + project.](https://support.nesi.org.nz/hc/en-gb/sections/360000196195-Accounts-Projects) + - Set up your [NeSI account + password.](https://support.nesi.org.nz/hc/en-gb/articles/360000335995) + - Set up Second [Factor + Authentication.](https://support.nesi.org.nz/hc/en-gb/articles/360000203075) + - Be using the Windows operating system. +  WinSCP is an SCP client for windows implementing the SSH protocol from PuTTY. @@ -42,17 +42,17 @@ Upon startup: 1. Add a *New Site* and set: - Enter in *Host Name: *login.mahuika.nesi.org.nz or -login.maui.nesi.org.nz + login.maui.nesi.org.nz - Enter your NeSI account username into *User name:* (Password -optional) + optional) !!! prerequisite Tip -For "file protocol" (the topmost drop-down menu), either SCP or SFTP -is acceptable. If you are trying to move many small files or have a -slow or flaky Internet connection, you may find that SFTP performs -better than SCP. Feel free to try both and see which works best for -you. - + For "file protocol" (the topmost drop-down menu), either SCP or SFTP + is acceptable. 
If you are trying to move many small files or have a + slow or flaky Internet connection, you may find that SFTP performs + better than SCP. Feel free to try both and see which works best for + you. + ![WinSCP2.png](../../assets/images/WinSCP-PuTTY_Setup_Windows_0.png) 5\. Open Advanced Settings. @@ -84,7 +84,7 @@ password and pass it to PuTTY* ![WinSCP4.png](../../assets/images/WinSCP-PuTTY_Setup_Windows_3.png) - +  ## Setup for Xming (Optional) @@ -102,8 +102,8 @@ PuTTY/Terminal client path. 3\. Restart your session. !!! prerequisite Important -In order for X11 forwarding to work you must have an Xming server -running before connecting to the HPC. + In order for X11 forwarding to work you must have an Xming server + running before connecting to the HPC. ## Usage @@ -136,7 +136,7 @@ current directory. By default, WinSCP will create multiple tunnels for file transfers. Occasionally this can lead to an excessive number of prompts. Limiting -number of tunnels will reduce the number of times you are prompted. +number of tunnels will reduce the number of times you are prompted.  1\. Open settings @@ -146,15 +146,15 @@ number of tunnels will reduce the number of times you are prompted. transfers at the same time' to '1' and untick 'Use multiple connections for a single transfer'. -![winscp\_Settings2.png](../../assets/images/WinSCP-PuTTY_Setup_Windows_11.png) +![winscp\_Settings2.png](../../assets/images/WinSCP-PuTTY_Setup_Windows_11.png)  !!! prerequisite Important -As WinSCP uses multiple tunnels for file transfer you will be required -to authenticate again on your first file operation of the session. The -second prompt for your second-factor token can be skipped, just as -with login authentication. + As WinSCP uses multiple tunnels for file transfer you will be required + to authenticate again on your first file operation of the session. The + second prompt for your second-factor token can be skipped, just as + with login authentication. !!! prerequisite What Next? 
-- [Moving files to/from a -cluster.](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) -- Setting up -an [X-Server](https://support.nesi.org.nz/hc/en-gb/articles/360001075975) -(optional). \ No newline at end of file + - [Moving files to/from a + cluster.](https://support.nesi.org.nz/hc/en-gb/articles/360000578455) + - Setting up + an [X-Server](https://support.nesi.org.nz/hc/en-gb/articles/360001075975) + (optional). \ No newline at end of file diff --git a/docs/Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md b/docs/Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md index 0b324eb96..ad38a77cc 100644 --- a/docs/Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md +++ b/docs/Scientific_Computing/Terminal_Setup/Windows_Subsystem_for_Linux_WSL.md @@ -20,24 +20,24 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Windows 10. + - Windows 10. Windows subsystem for Linux is a feature that allows you to utilise some linux commands and command line tools. WSL is enabled by default on later versions of Windows 10. !!! prerequisite Tip -You can test whether WSL is installed by opening 'Windows PowerShell' -and typing `wsl`. + You can test whether WSL is installed by opening 'Windows PowerShell' + and typing `wsl`. ## Enabling WSL -1. Open 'Turn Windows features on or off' -![WSL1.png](../../assets/images/Windows_Subsystem_for_Linux_WSL.png) -2. Scroll down and tick the 'Windows Subsystem for Linux' option. -![WSL2.png](../../assets/images/Windows_Subsystem_for_Linux_WSL_0.png) -Click OK +1. Open 'Turn Windows features on or off' + ![WSL1.png](../../assets/images/Windows_Subsystem_for_Linux_WSL.png) +2. Scroll down and tick the 'Windows Subsystem for Linux' option. + ![WSL2.png](../../assets/images/Windows_Subsystem_for_Linux_WSL_0.png) + Click OK 3. Wait for the installation to finish then restart your computer. !!! prerequisite What Next? 
-- Set up your [SSH config -file](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). \ No newline at end of file + - Set up your [SSH config + file](https://support.nesi.org.nz/hc/en-gb/articles/360000625535). \ No newline at end of file diff --git a/docs/Scientific_Computing/Terminal_Setup/X11_on_NeSI.md b/docs/Scientific_Computing/Terminal_Setup/X11_on_NeSI.md index 8a27dfcef..77384cf99 100644 --- a/docs/Scientific_Computing/Terminal_Setup/X11_on_NeSI.md +++ b/docs/Scientific_Computing/Terminal_Setup/X11_on_NeSI.md @@ -20,9 +20,9 @@ zendesk_section_id: 360000189696 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite Requirements -- Have working -[terminal](https://support.nesi.org.nz/hc/en-gb/sections/360000189696) -set up. + - Have working + [terminal](https://support.nesi.org.nz/hc/en-gb/sections/360000189696) + set up. X-11 is a protocol for rendering graphical user interfaces (GUIs) that can be sent along an SSH tunnel. If you plan on using a GUI on a NeSI @@ -44,17 +44,17 @@ Download links for X-servers can be found below. Make sure you have launched the server and it is running in the background, look for this ![mceclip0.png](../../assets/images/X11_on_NeSI.png) symbol in your -taskbar +taskbar  !!! prerequisite Note -MobaXterm has a build in X server, no setup required. By default the -server is started alongside MobaXterm. You can check it's status in -the top left hand corner -(![xon.png](../../assets/images/X11_on_NeSI_0.png)=on, ![off.png](../../assets/images/X11_on_NeSI_1.png)=off). + MobaXterm has a build in X server, no setup required. By default the + server is started alongside MobaXterm. You can check it's status in + the top left hand corner + (![xon.png](../../assets/images/X11_on_NeSI_0.png)=on, ![off.png](../../assets/images/X11_on_NeSI_1.png)=off).  ## X-Forwarding Finally your ssh tunnel must be set up to 'forward' along X-11 -connections. +connections.  
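A quick way to check that forwarding is active once you are logged in: sshd sets the `DISPLAY` variable on the remote side when the X11 tunnel is up, so an empty value means forwarding is not working:

``` sl
# DISPLAY is set by sshd when X11 forwarding succeeds, e.g. localhost:10.0;
# if it is empty, the tunnel is not forwarding X11
echo "DISPLAY=$DISPLAY"
```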
### OpenSSH (terminal) @@ -70,7 +70,7 @@ ssh -Y login.nesi.org.nz ### MobaXterm -Under 'session settings' for your connection make sure the X-11 + Under 'session settings' for your connection make sure the X-11 forwarding box is checked. ![x11moba.png](../../assets/images/X11_on_NeSI_2.png) @@ -114,7 +114,7 @@ If your application requires X11 in order to run, but does not need to be interactive you can use X11 Virtual Frame Buffer. This may be required to in order to run visual applications on the compute nodes. Prepending any command with `xfvb-run` will provide a dummy X11 server -for the application to render to. +for the application to render to. e.g. ``` sl diff --git a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI.md b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI.md index 5df8fd267..241f943d4 100644 --- a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI.md +++ b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Available_GPUs_on_NeSI.md @@ -24,7 +24,7 @@ compute-intensive research and support more analysis at scale. Depending on the type of GPU, you can access them in different ways, such as via batch scheduler (Slurm), interactively (using [Jupyter on NeSI](https://support.nesi.org.nz/hc/en-gb/articles/360001555615)), or -Virtual Machines (VMs). +Virtual Machines (VMs).  The table below outlines the different types of GPUs, who can access them and how, and whether they are currently available or on the future @@ -34,7 +34,7 @@ If you have any questions about GPUs on NeSI or the status of anything listed in the table, [contact Support](https://support.nesi.org.nz/hc/en-gb/requests/new). 
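For the batch scheduler route, a minimal Slurm request sketch; the `--gpus-per-node` type string (e.g. `P100:1`, `A100:1`) follows the convention used elsewhere in this documentation, while the job name and time limit are placeholders to adjust for your work:

``` sl
#!/bin/bash -e
#SBATCH --job-name      gpu-check
#SBATCH --gpus-per-node P100:1    # or A100:1, depending on the GPU type
#SBATCH --time          00:10:00

# nvidia-smi lists the GPU(s) Slurm has allocated to the job
nvidia-smi
```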
- +  | GPGPU | Purpose | Location | Access mode | Who can access | Status | |----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------| diff --git a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Mahuika.md b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Mahuika.md index 602fa3e1f..f30961961 100644 --- a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Mahuika.md +++ b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Mahuika.md @@ -41,17 +41,17 @@ ssh to these nodes after logging onto the NeSI lander node. ## Notes 1. The Cray Programming Environment on Mahuika, differs from that on -Māui. + Māui. 2. The `/home, /nesi/project`, and `/nesi/nobackup` -[filesystems](https://support.nesi.org.nz/hc/en-gb/articles/360000177256) -are mounted on Mahuika. + [filesystems](https://support.nesi.org.nz/hc/en-gb/articles/360000177256) + are mounted on Mahuika. 3. Read about how to compile and link code on Mahuika in section -entitled: [Compiling software on -Mahuika.](https://support.nesi.org.nz/hc/en-gb/articles/360000329015) + entitled: [Compiling software on + Mahuika.](https://support.nesi.org.nz/hc/en-gb/articles/360000329015) 4. An extension to Mahuika with additional, upgraded resources is also -available. see [Milan Compute -Nodes](https://support.nesi.org.nz/hc/en-gb/articles/6367209795471-Milan-Compute-Nodes) -for details on access + available. 
see [Milan Compute + Nodes](https://support.nesi.org.nz/hc/en-gb/articles/6367209795471-Milan-Compute-Nodes) + for details on access ## Mahuika HPC Cluster (Cray CS400) @@ -149,7 +149,7 @@ Rocky 8.5 on Milan

- +  ##  Storage (IBM ESS) @@ -163,5 +163,6 @@ Scratch and persistent storage are accessible from Mahuika, as well as from Māui and the ancillary nodes. Offline storage will in due course be accessible indirectly, via a dedicated service. +  - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui.md b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui.md index ad079b162..7d2be0f59 100644 --- a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui.md +++ b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui.md @@ -36,13 +36,13 @@ example pre- and post-processing work, and to provide virtual lab services, we offer a small number [Māui ancillary nodes](https://support.nesi.org.nz/hc/articles/360000203776). !!! prerequisite Tips -The computing capacity of the Māui ancillary nodes is limited. If you -think you will need large amounts of computing power for small jobs in -addition to large jobs that can run on Māui, please [contact -us](https://support.nesi.org.nz/hc/requests/new) about getting an -allocation on -[Mahuika](https://support.nesi.org.nz/hc/en-gb/articles/360000163575), -our high-throughput computing cluster. + The computing capacity of the Māui ancillary nodes is limited. If you + think you will need large amounts of computing power for small jobs in + addition to large jobs that can run on Māui, please [contact + us](https://support.nesi.org.nz/hc/requests/new) about getting an + allocation on + [Mahuika](https://support.nesi.org.nz/hc/en-gb/articles/360000163575), + our high-throughput computing cluster. The login or build nodes maui01 and maui02 provide access to the full Cray Programming Environment (e.g. editors, compilers, linkers, debug @@ -52,15 +52,15 @@ lander node. Jobs can be submitted to the HPC from these nodes. ## Important Notes 1. 
The Cray Programming Environment on the XC50 (supercomputer) differs -from that on Mahuika and the Māui Ancillary nodes. + from that on Mahuika and the Māui Ancillary nodes. 2. The `/home, /nesi/project`, and `/nesi/nobackup` [file -systems](https://support.nesi.org.nz/hc/articles/360000177256) are -mounted on Māui. + systems](https://support.nesi.org.nz/hc/articles/360000177256) are + mounted on Māui. 3. The I/O subsystem on the XC50 can provide high bandwidth to disk -(large amounts of data), but not many separate reading or writing -operations.** **If your code performs a lot of disk read or write -operations, it should be run on either the [Māui ancillary -nodes](https://support.nesi.org.nz/hc/en-gb/articles/360000203776) or [Mahuika](https://support.nesi.org.nz/hc/en-gb/articles/360000163575). + (large amounts of data), but not many separate reading or writing + operations. If your code performs a lot of disk read or write + operations, it should be run on either the [Māui ancillary + nodes](https://support.nesi.org.nz/hc/en-gb/articles/360000203776) or [Mahuika](https://support.nesi.org.nz/hc/en-gb/articles/360000163575). All Māui resources are indicated below, and the Māui Ancillary Node resources @@ -138,5 +138,6 @@ SUSE Linux Enterprise Server 15 SP3
| **Persistent storage** (accessible from all Māui, Mahuika, and Ancillary nodes). | 1,765 TB (IBM Spectrum Scale, version 5.0) Shared Storage. Total I/O bandwidth to disks is 65 GB/s (i.e. the /home and /nesi/project filesystems) | | **Offline storage** (accessible from all Māui, Mahuika, and Ancillary nodes). | Of the order of 100 PB (compressed) | +  - +  \ No newline at end of file diff --git a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui_Ancillary.md b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui_Ancillary.md index b21ace96c..4a6ce3433 100644 --- a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui_Ancillary.md +++ b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Maui_Ancillary.md @@ -26,17 +26,17 @@ The Māui Ancillary Nodes provide access to a Virtualised environment that supports: 1. Pre- and post-processing of data for jobs running on the -[Māui](https://support.nesi.org.nz/hc/articles/360000163695) -Supercomputer or -[Mahuika](https://support.nesi.org.nz/hc/articles/360000163575) HPC -Cluster. Typically, as serial processes on a Slurm partition running -on a set of Ancillary node VMs or baremetal servers. + [Māui](https://support.nesi.org.nz/hc/articles/360000163695) + Supercomputer or + [Mahuika](https://support.nesi.org.nz/hc/articles/360000163575) HPC + Cluster. Typically, as serial processes on a Slurm partition running + on a set of Ancillary node VMs or baremetal servers. 2. Virtual laboratories that provide interactive access to data stored -on the Māui (and Mahuika) storage together with domain analysis -toolsets (e.g. Seismic, Genomics, Climate, etc.). To access the -Virtual Laboratory nodes, users will first logon to the NeSI Lander -node, then ssh to the relevant Virtual Laboratory. Users may submit -jobs to Slurm partitions from Virtual Laboratory nodes. + on the Māui (and Mahuika) storage together with domain analysis + toolsets (e.g. Seismic, Genomics, Climate, etc.). 
To access the + Virtual Laboratory nodes, users will first log on to the NeSI Lander + node, then ssh to the relevant Virtual Laboratory. Users may submit + jobs to Slurm partitions from Virtual Laboratory nodes. 3. Remote visualisation of data resident on the filesystems. 4. GPGPU computing. @@ -46,10 +46,10 @@ and any (multi-cluster) Slurm partitions on the Māui or Mahuika systems. ## Notes: 1. The `/home, /nesi/project`, and `/nesi/nobackup` -[filesystems](https://support.nesi.org.nz/hc/articles/360000177256) -are mounted on the Māui Ancillary Nodes. + [filesystems](https://support.nesi.org.nz/hc/articles/360000177256) + are mounted on the Māui Ancillary Nodes. 2. The Māui Ancillary nodes have Skylake processors, while the Mahuika -nodes use Broadwell processors. + nodes use Broadwell processors. ## Ancillary Node Specifications @@ -66,7 +66,7 @@ nodes use Broadwell processors. | **Workload Manager** | Slurm (Multi-Cluster) | | **OpenStack** | The Cray CS500 Ancillary nodes will normally be presented to users as Virtual Machines, provisioned from the physical hardware as required. | - +  The Māui\_Ancil nodes have a different working environment from the Māui (login) nodes. 
Therefore a CS500 login node is provided, to create and @@ -83,12 +83,12 @@ could add the following section to `~/.ssh/config` (extending the setup](https://support.nesi.org.nz/hc/en-gb/articles/360000625535-Recommended-Terminal-Setup)) ``` sl -Host w-mauivlab01 -User -Hostname w-mauivlab01.maui.nesi.org.nz -ProxyCommand ssh -W %h:%p maui -ForwardX11 yes -ForwardX11Trusted yes -ServerAliveInterval 300 -ServerAliveCountMax 2 +Host w-mauivlab01 + User + Hostname w-mauivlab01.maui.nesi.org.nz + ProxyCommand ssh -W %h:%p maui + ForwardX11 yes + ForwardX11Trusted yes + ServerAliveInterval 300 + ServerAliveCountMax 2 ``` \ No newline at end of file diff --git a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Overview.md b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Overview.md index d10126106..a0b1c4a79 100644 --- a/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Overview.md +++ b/docs/Scientific_Computing/The_NeSI_High_Performance_Computers/Overview.md @@ -29,17 +29,17 @@ data-centric and data intensive research computing environment built on leading edge high performance computing (HPC) systems. - Māui, which in Maori mythology is credited with catching a giant -fish using a fishhook taken from his grandmother's jaw-bone; the -giant fish would become the North Island of New Zealand, provides a -Capability (i.e. Supercomputer) HPC resource on which researchers -can run simulations and calculations that require large numbers -(e.g. thousands) of processing cores working in a tightly-coupled, -parallel fashion. + fish using a fishhook taken from his grandmother's jaw-bone; the + giant fish would become the North Island of New Zealand, provides a + Capability (i.e. Supercomputer) HPC resource on which researchers + can run simulations and calculations that require large numbers + (e.g. thousands) of processing cores working in a tightly-coupled, + parallel fashion. 
- Mahuika, which, in Maori mythology, is a fire deity, from whom Māui -obtained the secret of making fire, provides a Capacity (i.e. -Cluster) HPC resource to allow researchers to run many small (e.g. -from 1 core to a few hundred cores) compute jobs simultaneously -(aka  High Throughput Computing). + obtained the secret of making fire, provides a Capacity (i.e. + Cluster) HPC resource to allow researchers to run many small (e.g. + from 1 core to a few hundred cores) compute jobs simultaneously + (aka  High Throughput Computing). Māui and Mahuika share the same high performance filesystems; accordingly, data created on either system are visible on the other @@ -52,16 +52,16 @@ on [Māui](https://support.nesi.org.nz/hc/articles/360000203776)  provide the research community with: - Leading edge HPCs (both Capacity and Capability) via a single point -of access; + of access; - New user facing services that can act on the data held within the -NeSI HPC infrastructure, including: -- Pre- and post-processing systems to support workflows; -- Virtual Laboratories that provide interactive access to science -domain specific tools \[Coming soon\]; -- Remote visualisation services \[Coming soon\]; -- Advanced data analytics tools, and -- The ability to seamlessly move data between high performance -disk storage and offline tape. + NeSI HPC infrastructure, including: + - Pre- and post-processing systems to support workflows; + - Virtual Laboratories that provide interactive access to science + domain specific tools \[Coming soon\]; + - Remote visualisation services \[Coming soon\]; + - Advanced data analytics tools, and + - The ability to seamlessly move data between high performance + disk storage and offline tape. - Offsite replication of critical data (both online and offline). 
These systems are diff --git a/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md b/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md index 3e6b8fef0..8182c104d 100644 --- a/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md +++ b/docs/Scientific_Computing/Training/Introduction_to_computing_on_the_NeSI_HPC.md @@ -20,10 +20,10 @@ zendesk_section_id: 5203123172239 [//]: <> (REMOVE ME IF PAGE VALIDATED) - [Introduction to computing on the NeSI HPC (Part -1)](https://www.youtube.com/watch?v=RrFAb8Atsc0&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw) + 1)](https://www.youtube.com/watch?v=RrFAb8Atsc0&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw)  - [Introduction to computing on the NeSI HPC platform (Part -2)](https://www.youtube.com/watch?v=8TNcFZvXSao&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=2) + 2)](https://www.youtube.com/watch?v=8TNcFZvXSao&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=2)  - [Introduction to computing on the NeSI HPC (Part -3)](https://www.youtube.com/watch?v=0Vw4b7yY8o8&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=3) + 3)](https://www.youtube.com/watch?v=0Vw4b7yY8o8&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=3) - [Introduction to computing on the NeSI HPC (Part -4)](https://www.youtube.com/watch?v=kXf6RkRQ6tU&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=4) \ No newline at end of file + 4)](https://www.youtube.com/watch?v=kXf6RkRQ6tU&list=PLvbRzoDQPkuFsIzAWaIiYgs-kConq-Hjw&index=4) \ No newline at end of file diff --git a/docs/Scientific_Computing/Training/Webinars.md b/docs/Scientific_Computing/Training/Webinars.md index 012ab2b6a..d640c88ad 100644 --- a/docs/Scientific_Computing/Training/Webinars.md +++ b/docs/Scientific_Computing/Training/Webinars.md @@ -22,74 +22,75 @@ zendesk_section_id: 5203123172239 Our webinar playlist covers the following topics: - [Troubleshooting on -NeSI](https://www.youtube.com/watch?v=H_UXkj9Nmoc&t=7s) + 
NeSI](https://www.youtube.com/watch?v=H_UXkj9Nmoc&t=7s) - [Make the most of your HPC -allocation](https://www.youtube.com/watch?v=VVvEX3Q3kq8&t=2s) -- + allocation](https://www.youtube.com/watch?v=VVvEX3Q3kq8&t=2s) +- - [Genomics workflows and how they can streamline your -research](https://www.youtube.com/watch?v=Pb1M8Yyik4Y&t=3s) + research](https://www.youtube.com/watch?v=Pb1M8Yyik4Y&t=3s) - [Using Rmarkdown to create clear, reproducible -analyses](https://www.youtube.com/watch?v=uPwvVKhMfdA&t=1627s) + analyses](https://www.youtube.com/watch?v=uPwvVKhMfdA&t=1627s) - [Git Basics for -researchers](https://www.youtube.com/watch?v=l0GD7ZxBhJ4&t=316s) + researchers](https://www.youtube.com/watch?v=l0GD7ZxBhJ4&t=316s) - [Scripting at the speed of compiled code: -Vectorisation](https://www.youtube.com/watch?v=yDYXOntAlIk) + Vectorisation](https://www.youtube.com/watch?v=yDYXOntAlIk) - [Job scaling and running tests on -NeSI](https://www.youtube.com/watch?v=CqATGcNbipo&t=1s) + NeSI](https://www.youtube.com/watch?v=CqATGcNbipo&t=1s) - [Sharing Data with Groups Using -Globus](https://www.youtube.com/watch?v=SmkWHjFDfQY&t=1808s) + Globus](https://www.youtube.com/watch?v=SmkWHjFDfQY&t=1808s) - [High performance -modelling](https://www.youtube.com/watch?v=1nkiM59QI7w) + modelling](https://www.youtube.com/watch?v=1nkiM59QI7w) - [Jupyter Tips & Tricks From interactive experiments to batch jobs -and -more](https://www.youtube.com/watch?v=0Y1fMz2eZpc&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=2&t=264s) + and + more](https://www.youtube.com/watch?v=0Y1fMz2eZpc&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=2&t=264s) - [Tips & tricks for hosting a successful online -event](https://www.youtube.com/watch?v=XTeCHUZ2H_w&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=7&t=5s) + event](https://www.youtube.com/watch?v=XTeCHUZ2H_w&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=7&t=5s) - [Who needs GPUs? 
Tips for determining if your code is a -fit](https://www.youtube.com/watch?v=MlxvmzFQeUA&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=8&t=27s) + fit](https://www.youtube.com/watch?v=MlxvmzFQeUA&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=8&t=27s) - [NeSI and REANNZ infrastructure: tools and tips for research -success](https://www.youtube.com/watch?v=ScMm8GAsF0c&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=9&t=446s) + success](https://www.youtube.com/watch?v=ScMm8GAsF0c&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=9&t=446s) - [Should I use GPUs for my -research?](https://www.youtube.com/watch?v=PijLW7bpkUM&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=11&t=379s) + research?](https://www.youtube.com/watch?v=PijLW7bpkUM&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=11&t=379s) - [Globus for IT Professionals -webinar](https://www.youtube.com/watch?v=makHR0uf_y0&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=12&t=22s) + webinar](https://www.youtube.com/watch?v=makHR0uf_y0&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=12&t=22s) - [Webinar: Jupyter on NeSI - April -2021](https://www.youtube.com/watch?v=Hb-JeQ8FvdE&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=13&t=4s) + 2021](https://www.youtube.com/watch?v=Hb-JeQ8FvdE&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=13&t=4s) - [4 Tips for Getting Started on -NeSI](https://www.youtube.com/watch?v=NFybV9CBeh0&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=14&t=80s) + NeSI](https://www.youtube.com/watch?v=NFybV9CBeh0&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=14&t=80s) - [RMarkdown for Researchers - Weave together narrative text and -code](https://www.youtube.com/watch?v=MgoxmQNi7zU&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=15&t=28s) + code](https://www.youtube.com/watch?v=MgoxmQNi7zU&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=15&t=28s) - [From Cells to Clouds - Fluid Mechanics at -Scale](https://www.youtube.com/watch?v=j_xO8wAdrjk&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=16&t=9s) + 
Scale](https://www.youtube.com/watch?v=j_xO8wAdrjk&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=16&t=9s) - [Modelling gene regulatory networks via high performance -computing](https://www.youtube.com/watch?v=ydeeOlGOC4U&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=17&t=91s) + computing](https://www.youtube.com/watch?v=ydeeOlGOC4U&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=17&t=91s) - [Introducing Docker container technologies to -researchers](https://www.youtube.com/watch?v=EUw47Dfhs8w&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=18&t=9732s) + researchers](https://www.youtube.com/watch?v=EUw47Dfhs8w&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=18&t=9732s) - [Researcher Reflections Panel - tips for navigating your -supercomputing -journey](https://www.youtube.com/watch?v=kp4OfRSUSl4&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=19&t=28s) + supercomputing + journey](https://www.youtube.com/watch?v=kp4OfRSUSl4&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=19&t=28s) - [Reproducible research workflows with -containers](https://www.youtube.com/watch?v=SzYx2t67w84&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=20) + containers](https://www.youtube.com/watch?v=SzYx2t67w84&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=20) - [Git Basics for -researchers](https://www.youtube.com/watch?v=l0GD7ZxBhJ4&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=21&t=316s) + researchers](https://www.youtube.com/watch?v=l0GD7ZxBhJ4&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=21&t=316s) - [Using Rmarkdown to create clear, reproducible -analyses](https://www.youtube.com/watch?v=uPwvVKhMfdA&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=22&t=1627s) + analyses](https://www.youtube.com/watch?v=uPwvVKhMfdA&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=22&t=1627s) - [Genomics workflows and how they can streamline your -research](https://www.youtube.com/watch?v=Pb1M8Yyik4Y&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=23&t=3s) + 
research](https://www.youtube.com/watch?v=Pb1M8Yyik4Y&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=23&t=3s) - [Make the most of your HPC -allocation](https://www.youtube.com/watch?v=VVvEX3Q3kq8&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=24&t=2s) + allocation](https://www.youtube.com/watch?v=VVvEX3Q3kq8&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=24&t=2s) - [Scripting at the speed of compiled code - -Vectorisation](https://www.youtube.com/watch?v=yDYXOntAlIk&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=26) + Vectorisation](https://www.youtube.com/watch?v=yDYXOntAlIk&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=26) - [Getting Started on -NeSI](https://www.youtube.com/watch?v=nLfgnQPLgWk&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=27&t=875s) + NeSI](https://www.youtube.com/watch?v=nLfgnQPLgWk&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=27&t=875s) - [Building a Convolution Neural Network to Classify Underwater Sounds -(NeSI/ -NIWA)](https://www.youtube.com/watch?v=ttEW6QvgAHM&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=28&t=35s) + (NeSI/ + NIWA)](https://www.youtube.com/watch?v=ttEW6QvgAHM&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=28&t=35s) - [NeSI's National Data Transfer Platform — Sharing Data with Groups -Using -Globus](https://www.youtube.com/watch?v=SmkWHjFDfQY&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=29&t=1808s) + Using + Globus](https://www.youtube.com/watch?v=SmkWHjFDfQY&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=29&t=1808s) - [Job scaling and running tests on -NeSI](https://www.youtube.com/watch?v=CqATGcNbipo&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=30&t=3s) + NeSI](https://www.youtube.com/watch?v=CqATGcNbipo&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=30&t=3s) - [Python Profiling on -NeSI](https://www.youtube.com/watch?v=b1cpCeksWXw&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=32&t=93s) + NeSI](https://www.youtube.com/watch?v=b1cpCeksWXw&list=PLvbRzoDQPkuG_YGNgFnc0RaGW7wazDzIF&index=32&t=93s) +  \ No newline at end of file 
diff --git a/docs/Scientific_Computing/Training/Workshops.md b/docs/Scientific_Computing/Training/Workshops.md index 8fa4b2c2f..c94453cf3 100644 --- a/docs/Scientific_Computing/Training/Workshops.md +++ b/docs/Scientific_Computing/Training/Workshops.md @@ -26,16 +26,16 @@ more about upcoming training events.** Aotearoa-NeSI Training Team:** - **RNA-seq -workshop**[ \[Link\]](https://github.com/GenomicsAotearoa/RNA-seq-workshop) -[\[Workshop -Link\]](https://github.com/gregomics/RNAseqWorkshop2018/) + workshop**[ \[Link\]](https://github.com/GenomicsAotearoa/RNA-seq-workshop) + [\[Workshop + Link\]](https://github.com/gregomics/RNAseqWorkshop2018/) - **Genomics Data Carpentry (GDC) -Workshops** [\[Link\]](https://datacarpentry.org/genomics-workshop/) + Workshops** [\[Link\]](https://datacarpentry.org/genomics-workshop/) - **Genomics workflows: How CWL can streamline your research:** The -link to the presentation -is [here](https://www.nesi.org.nz/news/2020/03/webinar-recording-available%E2%80%93-genomics-workflows-how-cwl-can-streamline-your-research) + link to the presentation + is [here](https://www.nesi.org.nz/news/2020/03/webinar-recording-available%E2%80%93-genomics-workflows-how-cwl-can-streamline-your-research) - **Annual Metagenomics Summer School:** -[\[Link\]](https://github.com/GenomicsAotearoa/metagenomics_summer_school) + [\[Link\]](https://github.com/GenomicsAotearoa/metagenomics_summer_school) - **Genomics Data Carpentry Workshops** - **Genotyping by Sequencing:** [\[Workshop -Link\]](https://otagomohio.github.io/2019-06-11_GBS_EE/) \ No newline at end of file + Link\]](https://otagomohio.github.io/2019-06-11_GBS_EE/) \ No newline at end of file diff --git a/docs/Storage/Data_Recovery/File_Recovery.md b/docs/Storage/Data_Recovery/File_Recovery.md index a118b8916..659241ef6 100644 --- a/docs/Storage/Data_Recovery/File_Recovery.md +++ b/docs/Storage/Data_Recovery/File_Recovery.md @@ -22,8 +22,8 @@ zendesk_section_id: 360000042215 ## Snapshots Snapshots 
are read-only copies of the file system taken every day at -12:15, and retained for seven days. - +12:15, and retained for seven days. + Files from your project directory can be found in `/nesi/project/.snapshots/` followed by the weekday (capitalised) and project code, e.g.: @@ -32,15 +32,15 @@ project code, e.g.: /nesi/project/.snapshots/Sunday/nesi99999/ ``` -And for home directory; + And for your home directory: ``` sl /home/username/.snapshots/Sunday/ ``` !!! prerequisite Warning -Files in `/nesi/nobackup/` are not snapshotted. - + Files in `/nesi/nobackup/` are not snapshotted. +  Recovering a file or a directory from the snapshot is as simple as copying it over, e.g. @@ -49,5 +49,5 @@ copying it over, e.g. cp /nesi/project/.snapshots/Sunday/nesi99999/file.txt /nesi/project/nesi99999/file.txt ``` !!! prerequisite Tip -For copying directories use the flag -ir or just -r if you don't want -to be prompted before overwriting. \ No newline at end of file + For copying directories use the flag -ir or just -r if you don't want + to be prompted before overwriting. \ No newline at end of file diff --git a/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus_V5.md b/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus_V5.md index 587cdc1a7..2fc5479ad 100644 --- a/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus_V5.md +++ b/docs/Storage/Data_Transfer_Services/Data_Transfer_using_Globus_V5.md @@ -29,22 +29,22 @@ data transfer rates are achievable. This service allows data to be accessible to any person who has a Globus account. The newest implementation (v5) provides extra features and some key differences from the previous setup that you can find -[here](https://docs.globus.org/globus-connect-server/). +[here](https://docs.globus.org/globus-connect-server/).  To use Globus on NeSI platforms, you need: 1. 
A Globus account (see [Initial Globus Sign-Up and Globus -ID](https://support.nesi.org.nz/hc/en-gb/articles/360000817476)) + ID](https://support.nesi.org.nz/hc/en-gb/articles/360000817476)) 2. An active NeSI account (see [Creating a NeSI -Account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715)) + Account](https://support.nesi.org.nz/hc/en-gb/articles/360000159715)) 3. Access privileges on the non-NeSI Globus endpoint/collection you -plan on transferring data from or to. This other endpoint/collection -could be a personal one on your workstation, or it could be managed -by your institution or a third party. + plan on transferring data from or to. This other endpoint/collection + could be a personal one on your workstation, or it could be managed + by your institution or a third party. - *Note that a NeSI user account does not create a Globus account, and -similarly a Globus account does not create a NeSI user account. Nor -can you, as the end user, link the two through any website.* + similarly a Globus account does not create a NeSI user account. Nor + can you, as the end user, link the two through any website.* Both your accounts (NeSI and Globus) must exist before you try to use our DTN. @@ -109,7 +109,7 @@ step. ![mceclip0.png](../../assets/images/Data_Transfer_using_Globus_V5.png) - +  You can choose either of **<username>@wlg-dtn-oidc.nesi.org.nz** or NeSI Wellington OIDC Server (wlg-dtn-oidc.nesi.org.nz), they are all @@ -117,7 +117,7 @@ linked to the same website. If this is your first login, you may be asked to *bind* your primary identity to the OIDC login; you need to allow this. - +  ![mceclip1.png](../../assets/images/Data_Transfer_using_Globus_V6.png) @@ -130,7 +130,7 @@ authentication (2FA-same as accessing NeSI clusters).  In the not*** use any additional characters or spaces between your password and the token number.) 
- +                            ![mceclip0.png](../../assets/images/Data_Transfer_using_Globus_V7.png) After the login, you will navigate to the default root(display as "/") @@ -139,13 +139,13 @@ path, then you could change the path to \(1\) your ***/home/<username>*** directory, \(2\) project directory (read-only) -***/nesi/project/<project\_code>*** +***/nesi/project/<project\_code>***  \(3\) project sub-directories of ***/nesi/nobackup/<project\_code>***  - see [Globus Paths, Permissions,  Storage -Allocation](https://support.nesi.org.nz/hc/en-gb/articles/4405623499791-Globus-V5-Paths-Permissions-Storage-Allocation). - +Allocation](https://support.nesi.org.nz/hc/en-gb/articles/4405623499791-Globus-V5-Paths-Permissions-Storage-Allocation). + Navigate to your selected directory. e.g. the *nobackup* filesystem */nesi/nobackup/<project\_code>* and select the two-endpoint panel for transfer. @@ -160,11 +160,11 @@ start transferring files between them. ![mceclip4.png](../../assets/images/Data_Transfer_using_Globus_V9.png) Select files you wish to transfer and select the corresponding "Start" -button: - +button: + ![mceclip5.png](../../assets/images/Data_Transfer_using_Globus_V10.png) - +  To find other NeSI endpoints, type in "nesi#": @@ -173,27 +173,27 @@ To find other NeSI endpoints, type in "nesi#": ## In brief: - Sign in to the NeSI Globus Web App . -You will be taken to the *File Manager* page - + You will be taken to the *File Manager* page + - If this is your first time, you will need to create a Globus -account. + account. - Open the two-endpoint panel -![two\_endpoint.png](../../assets/images/Data_Transfer_using_Globus_V12.png)located -on the top-right of the *File Manager* page. + ![two\_endpoint.png](../../assets/images/Data_Transfer_using_Globus_V12.png)located + on the top-right of the *File Manager* page. - Select the Endpoints you wish to move files between (start typing -"nesi#" to see the list of NeSI DTNs to select from). 
-[Authenticate](https://support.nesi.org.nz/hc/en-gb/articles/4405630948495) -at both endpoints. + "nesi#" to see the list of NeSI DTNs to select from). + [Authenticate](https://support.nesi.org.nz/hc/en-gb/articles/4405630948495) + at both endpoints. - At Globus.org the** **endpoint **defaults to -"/home/<username>" path** (represented by "/~/") on Mahuika or -Māui. We do not recommend uploading data to your home directory, as -home directories are very small. Instead, navigate to an appropriate -project directory under /nobackup (see [Globus Paths, Permissions, -Storage -Allocation](https://support.nesi.org.nz/hc/en-gb/articles/4405623499791-Globus-V5-Paths-Permissions-Storage-Allocation)). + "/home/<username>" path** (represented by "/~/") on Mahuika or + Māui. We do not recommend uploading data to your home directory, as + home directories are very small. Instead, navigate to an appropriate + project directory under /nobackup (see [Globus Paths, Permissions, + Storage + Allocation](https://support.nesi.org.nz/hc/en-gb/articles/4405623499791-Globus-V5-Paths-Permissions-Storage-Allocation)). - Transfer the files by clicking the appropriate -![start.png](../../assets/images/Data_Transfer_using_Globus_V13.png)button -depending on the direction of the transfer. + ![start.png](../../assets/images/Data_Transfer_using_Globus_V13.png)button + depending on the direction of the transfer. - Check your email for confirmation about the job completion report. ## Transferring data using a personal endpoint @@ -210,7 +210,7 @@ transfers between personal endpoints). 
To share files with others outside your filesystem, see  - +  ## Using Globus to transfer data to or from the cloud diff --git a/docs/Storage/Data_Transfer_Services/Data_transfer_between_NeSI_and_a_PC_without_NeSI_two_factor_authentication.md b/docs/Storage/Data_Transfer_Services/Data_transfer_between_NeSI_and_a_PC_without_NeSI_two_factor_authentication.md index c06814e1c..9f722f9d4 100644 --- a/docs/Storage/Data_Transfer_Services/Data_transfer_between_NeSI_and_a_PC_without_NeSI_two_factor_authentication.md +++ b/docs/Storage/Data_Transfer_Services/Data_transfer_between_NeSI_and_a_PC_without_NeSI_two_factor_authentication.md @@ -44,16 +44,16 @@ have registered and created an account on Globus. - Go to - In the "Collection" search box type **NeSI Wellington DTN V5** and -select this collection + select this collection - *You may then need to log onto NeSI DTN to see the files* - Find the root folder of your guest collection, the directory you -would like to share, and -- click on the “Share” button, -- click on “Add Guest Collection” -- provide a "Display Name" -- press on "Create Collection" + would like to share, and + - click on the “Share” button, + - click on “Add Guest Collection” + - provide a "Display Name" + - press on "Create Collection" - You should now see your new guest collection at - + ![mceclip0.png](../../assets/images/Data_transfer_between_NeSI_and_a_PC_without_NeSI_two_factor_authentication.png) @@ -67,11 +67,11 @@ Configuration](https://support.nesi.org.nz/hc/en-gb/articles/360000217915). ## Step 3: Share a directory on your personal computer -- Launch "Globus Connect Personal" and go to "Preferences". +- Launch "Globus Connect Personal" and go to "Preferences".  
- Select "Access" -- click on the "+" sign to share a new directory -- navigate your directory and press "Open" -- make the directory writable + - click on the "+" sign to share a new directory + - navigate your directory and press "Open" + - make the directory writable Note: By default your entire home directory will be exposed. It is good practice to only share specific directories. You can remove your home @@ -84,9 +84,9 @@ directory by highlighting it and clicking on the "-" sign. - Go to [https://app.globus.org](https://app.globus.org/collections) - Log in - In the "FILE MANAGER" tab, type the source and destination -collections. The source path should be relative to the guest -collection root. However, the destination path is absolute, as can -be seen in the picture below. + collections. The source path should be relative to the guest + collection root. However, the destination path is absolute, as can + be seen in the picture below. - Click on the files you want to transfer and press "Start" ![mceclip3.png](../../assets/images/Data_transfer_between_NeSI_and_a_PC_without_NeSI_two_factor_authentication_1.png) \ No newline at end of file diff --git a/docs/Storage/Data_Transfer_Services/Download_and_share_CMIP6_data_for_NIWA_researchers.md b/docs/Storage/Data_Transfer_Services/Download_and_share_CMIP6_data_for_NIWA_researchers.md index 9c8f94b0f..90184c226 100644 --- a/docs/Storage/Data_Transfer_Services/Download_and_share_CMIP6_data_for_NIWA_researchers.md +++ b/docs/Storage/Data_Transfer_Services/Download_and_share_CMIP6_data_for_NIWA_researchers.md @@ -64,7 +64,7 @@ synda -h Below we demonstrate how synda might be used. -## Find some datasets +## Find some datasets  CMIP6 datasets are organised by institution\_id, experiment\_id, variable etc. A full list can be glanced @@ -83,9 +83,9 @@ new  CMIP6.CMIP.NCAR.CESM2-WACCM.1pctCO2.r1i1p1f1.Amon.ta.gn.v20190425 ... -as well as some other datasets. - +as well as some other datasets.  
+  ## Find out how big the datasets are @@ -101,7 +101,7 @@ This prints "Total files count: 16, New files count: 16, Total size: have not yet been downloaded. You can see that there are 16 files to download, taking nearly 50GB of disk space. -## Download/install the dataset +## Download/install the dataset  ``` sl synda install CMIP6.CMIP.NCAR.CESM2-WACCM.1pctCO2.r1i1p1f1.day.ta.gn.v20190425 @@ -120,7 +120,7 @@ back later to check progress. The data will end up under $ST\_HOME/data/CMIP6/CMIP/NCAR/CESM2-WACCM/1pctCO2/r1i1p1f1/day/ta/gn/v20190425 -in this case. +in this case.  You can type diff --git a/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md b/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md index f418ee019..9b6612c2b 100644 --- a/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md +++ b/docs/Storage/Data_Transfer_Services/Globus_Quick_Start_Guide.md @@ -29,11 +29,11 @@ data to or from NeSI, you need: 1. A NeSI account 2. A Globus account -3. Access to Globus DTNs or endpoint -- Access to a DTN (e.g., at your home institution) -- Personal endpoint if no DTN is available - +3. Access to Globus DTNs or endpoint + - Access to a DTN (e.g., at your home institution) + - Personal endpoint if no DTN is available +  ## Globus Account @@ -53,12 +53,12 @@ For more detailed instructions please see [Initial Globus Sign-Up, and your Globus Identities](https://support.nesi.org.nz/hc/en-gb/articles/360000817476). - +  ## Globus Endpoint Activation A NeSI account is required in addition to a Globus account to transfer -data to or from NeSI facilities. * +data to or from NeSI facilities. * * To transfer data, between two sites, you need to have access to a DTN or @@ -80,19 +80,19 @@ To activate the NeSI endpoint click go to bar on the left. 1. Next to "Collection", search for "NeSI Wellington DTN V5", select -it, then click "Continue". + it, then click "Continue". 2. In the 'Username**'** field, enter your NeSI HPC username. 
In the
-'Password**'** field, the password is
-`Login Password (First Factor)` +
-`Authenticator Code (Second Factor)` e.g. `password123456`. Please
-**do not** save your password on "*Browser settings*" as it will
-change every time due to the 2nd factor requirement.
+ 'Password**'** field, the password is
+ `Login Password (First Factor)` +
+ `Authenticator Code (Second Factor)` e.g. `password123456`. Please
+ **do not** save your password on "*Browser settings*" as it will
+ change every time due to the 2nd factor requirement.

![NeSI\_Globus\_Authenticate.png](../../assets/images/Globus_Quick_Start_Guide_0.png)

+ 
-
-
+ 

## Transferring Data

@@ -107,12 +107,13 @@ on the right, to the location on the left.

To see the progress of the transfer, please click 'Activity' on the
left hand menu bar.

-
+ 

If you have any questions or issues using Globus to transfer data to or
-from NeSI, email [support@nesi.org.nz](https://support@nesi.org.nz).
+from NeSI, email [support@nesi.org.nz](mailto:support@nesi.org.nz).
+ 
+ 
-
-
+ 
\ No newline at end of file
diff --git a/docs/Storage/Data_Transfer_Services/Globus_V5_Paths-Permissions-Storage_Allocation.md b/docs/Storage/Data_Transfer_Services/Globus_V5_Paths-Permissions-Storage_Allocation.md
index c0643e042..c7c7cccf8 100644
--- a/docs/Storage/Data_Transfer_Services/Globus_V5_Paths-Permissions-Storage_Allocation.md
+++ b/docs/Storage/Data_Transfer_Services/Globus_V5_Paths-Permissions-Storage_Allocation.md
@@ -35,7 +35,7 @@ path directory, displayed as '`/home/`'.

| `/nesi/nobackup/` | yes | `/nesi/nobackup/` | yes | read and write access |
| `/nesi/project/` | yes | `/nesi/project/` | yes | **read only** access |

-
+ 

For more information about NeSI filesystem, check
[here](https://support.nesi.org.nz/hc/en-gb/articles/360000177256-NeSI-File-Systems-and-Quotas).
@@ -43,16 +43,16 @@ For more information about NeSI filesystem, check

## Performing Globus transfers to/from Māui/Mahuika

- If transferring files off the cluster, move/copy files onto
-`/nesi/project` or `/nesi/nobackup` first, via your HPC access
+ `/nesi/project` or `/nesi/nobackup` first, via your HPC access
- Sign in to Globus and navigate the file manager to the path
-associated with your project (viz. `/nesi/project/` or
-`/nesi/nobackup/`)
+ associated with your project (viz. `/nesi/project/` or
+ `/nesi/nobackup/`)
- Click the "two-panels" area in the file manager and select the other
-endpoint
+ endpoint
- Select source of transfer
- Transfer data (from), using the appropriate "start" button
- If transferring files onto the cluster, the fastest location will be
-`/nesi/nobackup/`
+ `/nesi/nobackup/`

### Tips

@@ -60,21 +60,22 @@ endpoint
    `/nesi/nobackup` paths and these bookmarks pinned.
2.  Symbolic links can be created in your *project* directories and
-*nobackup* directories to enable easy moving of files to and from.
+*nobackup* directories to make it easy to move files between them. 
To create a symbolic link from a first to a second directory and
vice-versa (using *full* paths for <first> and <second>):

``` sl
-$ cd 
-$ ln -s 
-
-$ cd 
-$ ln -s 
+$ cd <first>
+$ ln -s <second> <alias>
+
+$ cd <second>
+$ ln -s <first> <alias>
```

-Alias can be any value which is convenient to you. .i.e. easy to
-identify
+The alias can be any name that is convenient to you, i.e. easy to
+identify.

After you do this, there will be an alias listed in each directory
that points to the other directory. You can see this with the **ls**
command, and **cd** from each to the other using its alias.
+  \ No newline at end of file diff --git a/docs/Storage/Data_Transfer_Services/Globus_V5_endpoint_activation.md b/docs/Storage/Data_Transfer_Services/Globus_V5_endpoint_activation.md index 12f2cf401..63663a5fb 100644 --- a/docs/Storage/Data_Transfer_Services/Globus_V5_endpoint_activation.md +++ b/docs/Storage/Data_Transfer_Services/Globus_V5_endpoint_activation.md @@ -19,17 +19,17 @@ zendesk_section_id: 360000040596 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) - +  ## Activating an Endpoint When you select an endpoint to transfer data to/from, you may be asked to authenticate with that endpoint: -![mceclip0.png](../../assets/images/Globus_V5_endpoint_activation.png) +![mceclip0.png](../../assets/images/Globus_V5_endpoint_activation.png) Transfers are only possible once you have supplied credentials that authenticate your access to the endpoint. This process is known as -"activating the endpoint".  The endpoint remains active for 24 hours. +"activating the endpoint".  The endpoint remains active for 24 hours.   The NeSI Wellington DTN V5 endpoint is protected by a second factor authentication (2FA-same as accessing NeSI clusters).  In the @@ -40,8 +40,8 @@ authentication (2FA-same as accessing NeSI clusters).  In the not*** use any additional characters or spaces between your password and the token number.) - -![mceclip0.png](../../assets/images/Globus_V5_endpoint_activation_0.png) +                      + ![mceclip0.png](../../assets/images/Globus_V5_endpoint_activation_0.png) Check the status of your endpoints at [ ](https://www.globus.org/app/console/endpoints) @@ -52,5 +52,6 @@ If a transfer is in progress and will not finish in time before your credentials expire, that transfer will pause and you will need to reauthenticate for it to continue. 
+  - +  \ No newline at end of file diff --git a/docs/Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md b/docs/Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md index 84425a471..c6a809108 100644 --- a/docs/Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md +++ b/docs/Storage/Data_Transfer_Services/Initial_Globus_Sign_Up-and_your_Globus_Identities.md @@ -19,7 +19,7 @@ zendesk_section_id: 360000040596 [//]: <> (^^^^^^^^^^^^^^^^^^^^) [//]: <> (REMOVE ME IF PAGE VALIDATED) - +  Globus provides logins for NeSI users via their organisation, GitHub, Google or GlobusID. @@ -52,13 +52,13 @@ If you have other identities in Globus (for example, a globusID), link them  to your Google ID account following the instructions at : -![identities.png](../../assets/images/Initial_Globus_Sign_Up-and_your_Globus_Identities_0.png) - - + ![identities.png](../../assets/images/Initial_Globus_Sign_Up-and_your_Globus_Identities_0.png) +  +  -Note: +Note:  If you had a Globus account before February 2016, that account ID is now your "GlobusID". @@ -66,19 +66,20 @@ your "GlobusID". Your groups and data-shares are associated with your login so you should ensure that your primary identity is the login you will generally use. +  +  +  +  +  +  +  +  - - - - - - - - +  \ No newline at end of file diff --git a/docs/Storage/Data_Transfer_Services/National_Data_Transfer_Platform.md b/docs/Storage/Data_Transfer_Services/National_Data_Transfer_Platform.md index d40a52412..035e80d58 100644 --- a/docs/Storage/Data_Transfer_Services/National_Data_Transfer_Platform.md +++ b/docs/Storage/Data_Transfer_Services/National_Data_Transfer_Platform.md @@ -120,7 +120,7 @@ class="sl">PFR Globus Connect Server
Plant & Food Research data 
Generally for internal users, but also for sharing large datasets
with collaborators
-
+ 
Contact the Plant and Food person you want to share data with.
 
@@ -170,7 +170,7 @@ href="mailto:support@nesi.org.nz">support@nesi.org.nz

-
+ 

## How to establish a New Zealand node
diff --git a/docs/Storage/Data_Transfer_Services/Personal_Globus_Endpoint_Configuration.md b/docs/Storage/Data_Transfer_Services/Personal_Globus_Endpoint_Configuration.md
index 0d4efa5ac..c3ca803d1 100644
--- a/docs/Storage/Data_Transfer_Services/Personal_Globus_Endpoint_Configuration.md
+++ b/docs/Storage/Data_Transfer_Services/Personal_Globus_Endpoint_Configuration.md
@@ -54,7 +54,7 @@ this page, then:

1. Click the "Add Globus Plus Sponsor" link.
2. Select "New Zealand eScience Infrastructure" from the list of
-potential sponsors.
+ potential sponsors.
3. Follow the on-screen instructions.

Once you have completed the process, your request to join the group will
diff --git a/docs/Storage/Data_Transfer_Services/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V5.md b/docs/Storage/Data_Transfer_Services/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V5.md
index 6207d00fb..96f150b32 100644
--- a/docs/Storage/Data_Transfer_Services/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V5.md
+++ b/docs/Storage/Data_Transfer_Services/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V5.md
@@ -22,8 +22,8 @@ zendesk_section_id: 360000040596

Shared Collections created in the previous NeSI endpoint **NeSI
Wellington DTN ** need to be re-created in the new endpoint **NeSI
Wellington DTN V5.** (The Shared Collections have been renamed *Guest
-Collections*).
-
+Collections*).  
+

## Guest Collections

@@ -33,42 +33,43 @@ Instructions on creating and sharing Guest Collections are available

In summary:

1. 
To re-create existing Collections, select *Share* and *Create Guest
-Collection
-
-![globus14.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V5.jpg)
-
-*
+ Collection
+
+ ![globus14.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V5.jpg)
+
+ *
2. Enter the [file
-path](https://support.nesi.org.nz/hc/en-gb/articles/4405623499791)
-of the directory to be shared.
-
-![globus10.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V6.jpg)
-
-This can also be copied from your existing Shared Collection on
-*NeSI Wellington DTN
-
-![globus07.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V7.jpg)
-
-*
+ path](https://support.nesi.org.nz/hc/en-gb/articles/4405623499791)
+ of the directory to be shared.
+
+ ![globus10.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V6.jpg)
+
+ This can also be copied from your existing Shared Collection on
+ *NeSI Wellington DTN
+
+ ![globus07.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V7.jpg)
+
+ *
3. Add Permissions for an individual or a Group (existing, or create a
-new group)
-
-![globus11.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V8.jpg)
-
+ new group)
+
+ ![globus11.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V8.jpg)
+
4. Users you share with will receive an email notification containing a
-link to the new *Guest Collection*.
+ link to the new *Guest Collection*.

## Bookmarks

1. Create bookmarks to **NeSI Wellington DTN V5** and new Guest
-Collections
-
-![globus13.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V9.jpg)
-
+ Collections
+
+ ![globus13.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V9.jpg)
+
2. 
Bookmarks to *NeSI Wellington DTN* and Shared Collections on *NeSI
-Wellington DTN* should be deleted.
+ Wellington DTN* should be deleted.
+![globus12.jpg](../../assets/images/Re_creating_Shared_Collections_and_Bookmarks_in_Globus_V10.jpg)
+
+
+ 
\ No newline at end of file
diff --git a/docs/Storage/Data_Transfer_Services/Syncing_files_between_NeSI_and_another_computer_with_globus_automate.md b/docs/Storage/Data_Transfer_Services/Syncing_files_between_NeSI_and_another_computer_with_globus_automate.md
index 12157f4ec..326c42993 100644
--- a/docs/Storage/Data_Transfer_Services/Syncing_files_between_NeSI_and_another_computer_with_globus_automate.md
+++ b/docs/Storage/Data_Transfer_Services/Syncing_files_between_NeSI_and_another_computer_with_globus_automate.md
@@ -46,45 +46,45 @@ content:

``` sl
{
-"source_endpoint_id": "ENDPOINT1",
-"destination_endpoint_id": "ENDPOINT2",
-"transfer_items": [
-{
-"source_path": "SOURCE_FOLDER",
-"destination_path": "DESTINATION_FOLDER",
-"recursive": true
-}
-],
-"sync_level": SYNC_LEVEL,
-"notify_on_succeeded": true,
-"notify_on_failed": true,
-"notify_on_inactive": true,
-"verify_checksum": true
+ "source_endpoint_id": "ENDPOINT1",
+ "destination_endpoint_id": "ENDPOINT2",
+ "transfer_items": [
+ {
+ "source_path": "SOURCE_FOLDER",
+ "destination_path": "DESTINATION_FOLDER",
+ "recursive": true
+ }
+ ],
+ "sync_level": SYNC_LEVEL,
+ "notify_on_succeeded": true,
+ "notify_on_failed": true,
+ "notify_on_inactive": true,
+ "verify_checksum": true
}
```

where

- `ENDPOINT1` is the source endpoint UUID, which you can get
-by clicking on the collection
-of your choice.
Using a guest collection will allow you to transfer
+ the data without two-factor authentication
- `ENDPOINT2` is the destination UUID, e.g. your personal endpoint
-UUID, which may be for your private mapped collection if you're
-transferring to your personal computer
+ UUID, which may be for your private mapped collection if you're
+ transferring to your personal computer
- `SOURCE_FOLDER` is the **relative** path of the source folder in the
-source endpoint. This is a directory, it cannot be a file. Use "/"
-if you do not intend to transfer the data from sub-directories
+ source endpoint. This is a directory, it cannot be a file. Use "/"
+ if you do not intend to transfer the data from sub-directories
- `DESTINATION_FOLDER` is the **absolute** path of the destination
-folder in the destination endpoint when the destination is a private
-mapped collection
+ folder in the destination endpoint when the destination is a private
+ mapped collection
- `SYNC_LEVEL` specifies the synchronisation level in the range 0-3.
-`SYNC_LEVEL=0` will transfer new files that do not exist on
-destination. Leaving this setting out will overwrite all the files
-on destination. Click
-[here](https://docs.globus.org/api/transfer/task_submit/#transfer_specific_fields)
-to see how other sync\_level settings can be used to update data in
-the destination directory based on modification time and checksums.
+ `SYNC_LEVEL=0` will transfer new files that do not exist on
+ destination. Leaving this setting out will overwrite all the files
+ on destination. Click
+ [here](https://docs.globus.org/api/transfer/task_submit/#transfer_specific_fields)
+ to see how other sync\_level settings can be used to update data in
+ the destination directory based on modification time and checksums.
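To make the template concrete, here is a hypothetical filled-in `transfer_input.json`. The endpoint UUIDs, folder paths and sync level below are invented placeholders for illustration, not real endpoints:

``` json
{
    "source_endpoint_id": "01234567-89ab-cdef-0123-456789abcdef",
    "destination_endpoint_id": "fedcba98-7654-3210-fedc-ba9876543210",
    "transfer_items": [
        {
            "source_path": "/output/run42/",
            "destination_path": "/home/username/results/run42/",
            "recursive": true
        }
    ],
    "sync_level": 0,
    "notify_on_succeeded": true,
    "notify_on_failed": true,
    "notify_on_inactive": true,
    "verify_checksum": true
}
```

With `sync_level` set to 0, repeated runs of the same transfer only copy files that do not yet exist at the destination.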
## Step 2: Initiate the transfer

@@ -98,7 +98,7 @@ then start the transfer using

``` sl
globus-automate action run --action-url https://actions.globus.org/transfer/transfer \
---body transfer_input.json
+ --body transfer_input.json
```

The first printed line will display the `ACTION_ID`. You can monitor
@@ -106,7 +106,7 @@ progress with

``` sl
globus-automate action status --action-url \
-https://actions.globus.org/transfer/transfer ACTION_ID
+ https://actions.globus.org/transfer/transfer ACTION_ID
```

or on the web at .
\ No newline at end of file
diff --git a/docs/Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system.md b/docs/Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system.md
index 4c8a6abfb..a18306e79 100644
--- a/docs/Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system.md
+++ b/docs/Storage/File_Systems_and_Quotas/Automatic_cleaning_of_nobackup_file_system.md
@@ -28,54 +28,54 @@ large-scale compute and analytics workflows.

Files are deleted if they meet **all** of the following criteria:

- The file was first created more than 120 days ago, and has not been
-accessed, and neither its data nor its metadata has been modified,
-for at least 120 days.
+ accessed, and neither its data nor its metadata has been modified,
+ for at least 120 days.
- The file was identified as a candidate for deletion two weeks
-previously, and as such is listed in a the project's
-nobackup `.policy` directory.
+ previously, and as such is listed in the project's
+ nobackup `.policy` directory.

!!! prerequisite Tip
-You can get a list of files marked for deletion with the command
-`nn_doomed_list`.
-Usage: nn\_doomed\_list \[-h\] \[--project \[PROJECTS\]\]
-\[--unlimited\] \[--limit LENGTHLIMIT\]
-optional arguments:
--h, --help            show this help message and exit
---project \[PROJECTS\], -p \[PROJECTS\]
-Comma-separated list of projects to process.
If not given, process all
-projects of which the user is a member
---unlimited, -u        Do not limit the length of the output file
---limit LENGTHLIMIT, -l LENGTHLIMIT
-Maximum length of the output file (lines)
-If no arguments are given, nn\_doomed\_list checks and displays all
-project directories the user is a member of.
-Default limit of the output file is 40 lines.
+ You can get a list of files marked for deletion with the command
+ `nn_doomed_list`.
+ Usage: nn\_doomed\_list \[-h\] \[--project \[PROJECTS\]\]
+ \[--unlimited\] \[--limit LENGTHLIMIT\]
+ optional arguments:
+ -h, --help            show this help message and exit
+ --project \[PROJECTS\], -p \[PROJECTS\]
+ Comma-separated list of projects to process. If not given, process all
+ projects of which the user is a member
+ --unlimited, -u        Do not limit the length of the output file
+ --limit LENGTHLIMIT, -l LENGTHLIMIT
+ Maximum length of the output file (lines)
+ If no arguments are given, nn\_doomed\_list checks and displays all
+ project directories the user is a member of. 
+ Default limit of the output file is 40 lines. 

The general process will follow a schedule as follows:

- **Notify** (at 106 days), then two weeks later **Delete** (at 120
-days).
+ days).
- Every fortnight on Tuesday morning, we will be reviewing files
-stored in the nobackup filesystem and identifying candidates for
-expiry.
+ stored in the nobackup filesystem and identifying candidates for
+ expiry.
- Project teams will be notified by email if they have file candidates
-for deletion. Emails will be sent two weeks in advance of any
-deletion taking place.
+ for deletion. Emails will be sent two weeks in advance of any
+ deletion taking place.

!!! prerequisite Warning
-Due to the nature of email, we cannot guarantee that any
-particular email message will be successfully delivered and
-received, for instance our emails could be blocked by your mail
-server or your inbox could be too full. We suggest that you check
We suggest that you check -`/nesi/nobackup//.policy` (see below) for a list of -deletion candidates, for each of your projects, whether you -received an email from us or not. + Due to the nature of email, we cannot guarantee that any + particular email message will be successfully delivered and + received, for instance our emails could be blocked by your mail + server or your inbox could be too full. We suggest that you check + `/nesi/nobackup//.policy` (see below) for a list of + deletion candidates, for each of your projects, whether you + received an email from us or not. - Immediately after deletion is complete, a new set of candidate files -will be identified for expiry during the next automated cleanup. -These candidate files are all files within the project's nobackup -that have not been created, accessed or modified within the last 106 -days. + will be identified for expiry during the next automated cleanup. + These candidate files are all files within the project's nobackup + that have not been created, accessed or modified within the last 106 + days. A file containing the list of candidates for deletion during the next cleanup, along with the date of the next cleanup, will be created in a @@ -88,11 +88,11 @@ or modify those contents). The gzip compressed filelist can be viewed and searched with the `zless` and `zgrep` commands respectively, e.g., `zless /nesi/nobackup/nesi12345/.policy/to_delete/.filelist.gz`. !!! prerequisite Warning -Objects other than files, such as directories and symbolic links, are -not deleted under this policy, even if at deletion time they are -empty, broken, or otherwise redundant. These entities typically take -up no disk space apart from a small amount of metadata, but still -count towards the project's inode (file count) quota. + Objects other than files, such as directories and symbolic links, are + not deleted under this policy, even if at deletion time they are + empty, broken, or otherwise redundant. 
These entities typically take + up no disk space apart from a small amount of metadata, but still + count towards the project's inode (file count) quota. ## What should I do with expiring data on the nobackup filesystem? @@ -104,33 +104,33 @@ If you have files identified as candidates for deletion that you need to keep beyond the scheduled expiry date, you have four options: - Move the file to your persistent project directory, -e.g., `/nesi/project/nesi12345`. You may need to request more disk -space, more inodes, or both, in your persistent project directory -before you can do this. [Submit a Support -request](https://support.nesi.org.nz/hc/en-gb/requests/new). We -assess such requests on a case-by-case basis.  Note:  You can save -space by compressing data.  Standard tools such as \`gzip\` -\`bzip2\` etc are available. + e.g., `/nesi/project/nesi12345`. You may need to request more disk + space, more inodes, or both, in your persistent project directory + before you can do this. [Submit a Support + request](https://support.nesi.org.nz/hc/en-gb/requests/new). We + assess such requests on a case-by-case basis.  Note:  You can save + space by compressing data.  Standard tools such as \`gzip\` + \`bzip2\` etc are available. - Move or copy the file to a storage system outside NeSI, for example -a research storage device at your institution. We expect most -projects to do this for finalised output data and appreciate prompt -egress of data once it is no longer used for processing. + a research storage device at your institution. We expect most + projects to do this for finalised output data and appreciate prompt + egress of data once it is no longer used for processing. - **Modify** the file before the deletion date, in which case the file -will not be deleted even though it is listed in `.policy`. This must -only be done in cases where you expect to begin active use of the -data again within the next month. + will not be deleted even though it is listed in `.policy`. 
This must + only be done in cases where you expect to begin active use of the + data again within the next month. - Note: Accessing (Open/Close and Open/Save) or Moving (\`mv\`) does -not update the timestamp of the file. Copying (\`cp\`) does create a -new timestamped file. + not update the timestamp of the file. Copying (\`cp\`) does create a + new timestamped file. !!! prerequisite Warning -Doing this for large numbers of files, or for files that together -take up a large amount of disk space, in your project's nobackup -directory, without regard for your project's computational -activity, constitutes a breach of [NeSI's acceptable use -policy](https://www.nesi.org.nz/services/high-performance-computing/guidelines/acceptable-use-policy). + Doing this for large numbers of files, or for files that together + take up a large amount of disk space, in your project's nobackup + directory, without regard for your project's computational + activity, constitutes a breach of [NeSI's acceptable use + policy](https://www.nesi.org.nz/services/high-performance-computing/guidelines/acceptable-use-policy). ## Where should I put my data? @@ -169,8 +169,8 @@ appropriate combination of: - persistent project storage on NeSI, - high performance /nobackup storage (temporary scratch space) on -NeSI, -- slow nearline storage (not released yet, on our roadmap), and + NeSI, +- slow nearline storage (not released yet, on our roadmap), and  - institutional storage infrastructure. ## User Webinars @@ -179,16 +179,16 @@ On 14 and 26 November 2019, we hosted webinars to explain these upcoming changes and answer user questions. 
If you missed these sessions, the archived materials are available at the links below: -- ***Video recordings: *** -14 November 2019 - -26 November 2019 *(repeat of 14 Nov session)* --  -- ***Slides: *** -*(same slides were used for both presentations)* - -- ***Q&A transcriptions: *** -14 November 2019 --  -26 November 2019 --  +- ***Video recordings: *** + 14 November 2019 -   + 26 November 2019 *(repeat of 14 Nov session)* + -  +- ***Slides: *** + *(same slides were used for both presentations)* +    +- ***Q&A transcriptions: *** + 14 November 2019 + -   + 26 November 2019 + -  diff --git a/docs/Storage/File_Systems_and_Quotas/Data_Compression.md b/docs/Storage/File_Systems_and_Quotas/Data_Compression.md index 8d0df0eca..eedaf7be2 100644 --- a/docs/Storage/File_Systems_and_Quotas/Data_Compression.md +++ b/docs/Storage/File_Systems_and_Quotas/Data_Compression.md @@ -139,28 +139,28 @@ re-compressed using the `mmchattr --compression yes` command or the ### The different states - **Uncompressed** and **untagged** for compression (default) - as -shown for the file `FileA.txt` above. + shown for the file `FileA.txt` above. - **Partially compressed** and **tagged** for compression - When file -is partially compressed (either because it was decompressed for -access or the full compression didn’t finish). It is still marked -for compression as the `COMPRESSION` misc attribute suggests, but -because it's not fully compressed the `illcompressed` flag will be -shown. + is partially compressed (either because it was decompressed for + access or the full compression didn’t finish). It is still marked + for compression as the `COMPRESSION` misc attribute suggests, but + because it's not fully compressed the `illcompressed` flag will be + shown. - **Fully compressed** and **tagged** for compression - The file is -fully compressed to its maximum possible state and because the file -is tagged for compression, only the misc attribute `COMPRESSION` -will be shown. 
+ fully compressed to its maximum possible state and because the file + is tagged for compression, only the misc attribute `COMPRESSION` + will be shown. - **Full or partially compressed** and **untagged** for compression - -The file might be fully or partially compressed and in this case -because the misc attribute `COMPRESSION` is not shown, it means the -file is untagged for being compressed (meaning it's tagged to be in -the uncompressed state). When a fully compressed file is untagged, -the flag `illcompressed` will be shown. After full decompression is -complete the file will become uncompressed and untagged for -compression. + The file might be fully or partially compressed and in this case + because the misc attribute `COMPRESSION` is not shown, it means the + file is untagged for being compressed (meaning it's tagged to be in + the uncompressed state). When a fully compressed file is untagged, + the flag `illcompressed` will be shown. After full decompression is + complete the file will become uncompressed and untagged for + compression. ## Using different compression algorithms @@ -173,15 +173,15 @@ Currently supported compression libraries are: - z Cold data. Favours compression efficiency over access speed. - lz4 Active, non-specific data. Favours access speed over compression -efficiency. + efficiency. ## Performance impacts Experiments showed that I/O performance was definitely affected if a file was in a compressed state. The extent of the effect, however, depends on the magnitude of I/O operations on the affected files.  I/O -intensive workloads may experience a significant performance drop. - +intensive workloads may experience a significant performance drop. + If compression has a significant impact on your software performance, please confirm it first by running a test job with and without compression and then contact us at . 
We will help diff --git a/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md b/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md index efe9f7c6b..0342a98ea 100644 --- a/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md +++ b/docs/Storage/File_Systems_and_Quotas/File_permissions_and_groups.md @@ -20,12 +20,12 @@ zendesk_section_id: 360000033936 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite See also -- [How can I let my fellow project team members read or write my -files?](https://support.nesi.org.nz/hc/en-gb/articles/360001237915) -- [How can I give read-only team members access to my -files?](https://support.nesi.org.nz/hc/en-gb/articles/4401821809679) -- [NeSI file systems and -quotas](https://support.nesi.org.nz/hc/en-gb/articles/360000177256) + - [How can I let my fellow project team members read or write my + files?](https://support.nesi.org.nz/hc/en-gb/articles/360001237915) + - [How can I give read-only team members access to my + files?](https://support.nesi.org.nz/hc/en-gb/articles/4401821809679) + - [NeSI file systems and + quotas](https://support.nesi.org.nz/hc/en-gb/articles/360000177256) Access to data (i.e. files and directories) on NeSI is controlled by POSIX permissions, supplemented with Access Control Lists (ACLs). @@ -39,11 +39,11 @@ as: - A group for each active NeSI project of which that user is a member - Groups for all active users, all active Mahuika users, all active -Māui users, etc. as appropriate + Māui users, etc. as appropriate - A group representing all active NeSI users who are affiliated with -the user's institution + the user's institution - Groups for specific licensed software to which that user has been -granted access + granted access You can see which groups you are a member of at any time by running the following command on a Mahuika, Māui or Māui ancillary login node: @@ -63,15 +63,15 @@ system, inherit this ownership scheme. 
You can override these defaults depending on how you use the `cp`, `scp`, `rsync`, etc. commands. Please consult the documentation for your copying program. !!! prerequisite Warning -If you choose to preserve the original owner and group, but that owner -and group (name or numeric ID) don't both exist at the destination, -your files may end up with odd permissions that you can't fix, for -example if you're copying from your workstation to NeSI. + If you choose to preserve the original owner and group, but that owner + and group (name or numeric ID) don't both exist at the destination, + your files may end up with odd permissions that you can't fix, for + example if you're copying from your workstation to NeSI. The default permissions mode for new home directories is as follows: - The owner has full privileges: read, write, and (where appropriate) -execute. + execute. - The group and world have no privileges. Some home directories have the "setgid" bit set. This has the effect @@ -96,18 +96,18 @@ Your project directory and nobackup directory should both have the "setgid" bit set, so that files created in either directory inherit the project group. !!! prerequisite Warning -The setgid bit only applies the directory's group to files that are -newly created in that directory, or copied to the directory over the -internet. If a file or directory is moved or copied from elsewhere on -the cluster, using for example the `mv` or `cp` command, that file or -directory will keep its original owner and group. Moreover, a -directory moved from elsewhere will probably not have its setgid bit -set, meaning that files and subdirectories later created within that -directory will inherit neither the group nor the setgid bit. -You probably don't want this to happen. 
For instructions on how to -prevent it, please see our article: [How can I let my fellow project -team members read or write my -files?](https://support.nesi.org.nz/hc/en-gb/articles/360001237915) + The setgid bit only applies the directory's group to files that are + newly created in that directory, or copied to the directory over the + internet. If a file or directory is moved or copied from elsewhere on + the cluster, using for example the `mv` or `cp` command, that file or + directory will keep its original owner and group. Moreover, a + directory moved from elsewhere will probably not have its setgid bit + set, meaning that files and subdirectories later created within that + directory will inherit neither the group nor the setgid bit. + You probably don't want this to happen. For instructions on how to + prevent it, please see our article: [How can I let my fellow project + team members read or write my + files?](https://support.nesi.org.nz/hc/en-gb/articles/360001237915) By default, the world, i.e. people not in the project team, have no privileges in respect of a project directory, with certain exceptions. @@ -116,11 +116,11 @@ Unlike home directories, project directories are set up with ACLs. The default ACL for a project directory is as follows: - The owner of a file or directory is allowed to read, write, execute -and modify the ACL of that file or directory + and modify the ACL of that file or directory - Every member of the file or directory's group is allowed to read, -write and execute the file or directory, but not modify its ACL + write and execute the file or directory, but not modify its ACL - Members of NeSI's support team are allowed to read and execute the -file or directory, but not change it or modify its ACL + file or directory, but not change it or modify its ACL Some projects also have read and execute privileges granted to a group "apache-web02-access". 
@@ -130,25 +130,25 @@ other is for files and directories that are created in future within that directory. We have set up both of these ACLs to be the same as each other for the two top level project directories. !!! prerequisite Tip -Some project teams, especially those with broader memberships, benefit -from read-only groups. A read-only group gets added to a project's ACL -once, and then individual members can be added to or removed from that -group as required. This approach involves much less editing of file -metadata than adding and removing individuals from the ACLs directly. -If you would like a read-only group created for your project, please -[contact us](https://support.nesi.org.nz/hc/requests/new). + Some project teams, especially those with broader memberships, benefit + from read-only groups. A read-only group gets added to a project's ACL + once, and then individual members can be added to or removed from that + group as required. This approach involves much less editing of file + metadata than adding and removing individuals from the ACLs directly. + If you would like a read-only group created for your project, please + [contact us](https://support.nesi.org.nz/hc/requests/new). The owner of a file or directory may create, edit or revoke that file or directory's ACL and, in the case of a directory, also the directory's default (heritable) ACL. !!! prerequisite Warning -Every time you edit an ACL of a file in the home or persistent project -directory, the file's metadata changes and triggers a backup of that -file. Doing so recursively on a large number of files and directories, -especially if they together amount to a lot of disk space, can strain -our backup system. Please consider carefully before doing a recursive -ACL change, and if possible make the change early on in the life of -the project on NeSI, so that only a few files are affected. 
+ Every time you edit an ACL of a file in the home or persistent project + directory, the file's metadata changes and triggers a backup of that + file. Doing so recursively on a large number of files and directories, + especially if they together amount to a lot of disk space, can strain + our backup system. Please consider carefully before doing a recursive + ACL change, and if possible make the change early on in the life of + the project on NeSI, so that only a few files are affected. ## Other directories diff --git a/docs/Storage/File_Systems_and_Quotas/I-O_Performance_Considerations.md b/docs/Storage/File_Systems_and_Quotas/I-O_Performance_Considerations.md index beb2746f9..380e72159 100644 --- a/docs/Storage/File_Systems_and_Quotas/I-O_Performance_Considerations.md +++ b/docs/Storage/File_Systems_and_Quotas/I-O_Performance_Considerations.md @@ -39,11 +39,11 @@ Māui login (aka build) nodes have native Spectrum Scale clients installed and provide high performance access to storage: - Metadata operations of the order of 190,000 file creates /second to -a unique directory can be expected; + a unique directory can be expected; - For 8MB transfer size, single stream I/O is ~3.3GB/s Write and -~5GB/s Read; + ~5GB/s Read; - For 4KB transfer size, single stream I/O is ~1.3GB/s Write and -~2GB/s Read. + ~2GB/s Read. ## Nodes which access storage via DVS @@ -52,7 +52,7 @@ as DVS (Data Virtualisation Service), to expose the Spectrum Scale file systems to XC compute nodes. DVS adds an additional layer of hardware and software between the XC compute nodes and storage (see Figure). -![cray\_xc50.jpg](../../assets/images/I-O_Performance_Considerations.jpg) + ![cray\_xc50.jpg](../../assets/images/I-O_Performance_Considerations.jpg) Figure 1: Cray XC50 DVS architecture. 
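The effect of transfer size on single-stream throughput, quoted in the hunk above (8 MB vs 4 KB), can be explored crudely with `dd`. A sketch only: the figures in the text come from NeSI's own benchmarks, not from this command, results here will be dominated by the page cache, and `/tmp` is a stand-in path:

```sh
# Write the same 64 MiB at an 8 MB and then a 4 KB transfer size; dd reports
# throughput on stderr (the last line of its output). Indicative only.
dd if=/dev/zero of=/tmp/io_demo.8m bs=8M count=8     2>&1 | tail -1
dd if=/dev/zero of=/tmp/io_demo.4k bs=4K count=16384 2>&1 | tail -1
rm -f /tmp/io_demo.8m /tmp/io_demo.4k                # clean up the test files
```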
@@ -63,15 +63,15 @@ Accordingly, the equivalent performance numbers for DVS connected compute nodes are: - Metadata operations of the order of 36,000 file creates /second to a -unique directory can be expected, i.e. approximately 23% of that -achievable on a node that has a Spectrum Scale client. + unique directory can be expected, i.e. approximately 23% of that + achievable on a node that has a Spectrum Scale client. - For 8MB transfer size, single stream I/O, is ~3.2GB/s for Write and -~3.2 GB/s for Read; + ~3.2 GB/s for Read; - For 4KB transfer size, single stream I/O, is ~2.3GB/s for Write and -~2.5GB/s for Read (when using IOBUF - – see Caution below). When -IOBUF is not used Read and -Write performance is <1GB/s. + ~2.5GB/s for Read (when using IOBUF + – see Caution below). When + IOBUF is not used Read and + Write performance is <1GB/s. Unless Cray’s [IOBUF](#_IOBUF_-_Caution) capability is suitable for an application, users should avoid @@ -110,3 +110,4 @@ about tests and results with regards to jobs performance of transparent data compression on the NeSI platforms on our [Data Compression support page](https://support.nesi.org.nz/hc/en-gb/articles/6359601973135). +  \ No newline at end of file diff --git a/docs/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md b/docs/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md index 5076c5852..e9d742fee 100644 --- a/docs/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md +++ b/docs/Storage/File_Systems_and_Quotas/NeSI_File_Systems_and_Quotas.md @@ -25,13 +25,13 @@ zendesk_section_id: 360000033936 [//]: <> (REMOVE ME IF PAGE VALIDATED) !!! prerequisite New Feature -[Transparent File Compression](#h_01GZ2Q7PG53YQEKFDDWTWHHDVT) - we -have recently started rolling out compression of inactive data on the -NeSI Project filesystem. Please see the [documentation -below](#h_01GZ2Q22EAZYMA7E9XG9F5FC1Z) to learn more about how this -works and what data will be compressed. 
- + [Transparent File Compression](#h_01GZ2Q7PG53YQEKFDDWTWHHDVT) - we + have recently started rolling out compression of inactive data on the + NeSI Project filesystem. Please see the [documentation + below](#h_01GZ2Q22EAZYMA7E9XG9F5FC1Z) to learn more about how this + works and what data will be compressed. +  [Māui](https://support.nesi.org.nz/hc/articles/360000163695) and [Mahuika](https://support.nesi.org.nz/hc/articles/360000163575), along @@ -41,7 +41,7 @@ before that as GPFS, or General Parallel File System - we'll generally refer to it as "Scale" where the context is clear. You may query your actual usage and disk allocations using the following -command: +command:  `$ nn_storage_quota` @@ -219,12 +219,12 @@ interfaces We use Scale soft and hard quotas for both disk space and inodes. - Once you exceed a fileset's soft quota, a one-week countdown timer -starts. When that timer runs out, you will no longer be able to -create new files or write more data in that fileset. You can reset -the countdown timer by dropping down to under the soft quota limit. + starts. When that timer runs out, you will no longer be able to + create new files or write more data in that fileset. You can reset + the countdown timer by dropping down to under the soft quota limit. - You will not be permitted to exceed a fileset's hard quota at all. -Any attempt to try will produce an error; the precise error will -depend on how your software responds to running out of disk space. + Any attempt to try will produce an error; the precise error will + depend on how your software responds to running out of disk space. When quotas are first applied to a fileset, or are reduced, it is possible to end up with more data or files in the fileset than the quota @@ -234,27 +234,27 @@ but will prevent creation of new data or files. #### **Notes:** - You may request an increase in storage and inode quota if needed by -a project. 
This may in turn be reduced as part of managing overall -risk, where large amounts of quota aren't used for a long period (~6 -Months). + a project. This may in turn be reduced as part of managing overall + risk, where large amounts of quota aren't used for a long period (~6 + Months). - If you need to compile or install a software package that is large -or is intended for use by a project team, please build it -in `/nesi/project/` rather than `/home/`. + or is intended for use by a project team, please build it + in `/nesi/project/` rather than `/home/`. - As the `/nesi/nobackup` file system provides the highest -performance, input files should be moved or copied to this file -system before starting any job that makes use of them. Likewise, job -scripts should be written so as to write output files to the -`/nesi/nobackup` file system. If you wish to keep your data for the -long term, you can include as a final part of your job script an -operation to copy or move the output data to the `/nesi/project` -file system. + performance, input files should be moved or copied to this file + system before starting any job that makes use of them. Likewise, job + scripts should be written so as to write output files to the + `/nesi/nobackup` file system. If you wish to keep your data for the + long term, you can include as a final part of your job script an + operation to copy or move the output data to the `/nesi/project` + file system. - Keep in mind that data on `/nesi/nobackup` is not backed up, -therefore users are advised to move valuable data -to `/nesi/project/`, or, if the data is seldom used, -to other storage such as an institutional storage facility, as soon -as batch jobs are completed. Please do **not** use the `touch` -command to prevent the cleaning policy from removing files, because -this behaviour would deprive the community of a shared resource. 
+ therefore users are advised to move valuable data + to `/nesi/project/`, or, if the data is seldom used, + to other storage such as an institutional storage facility, as soon + as batch jobs are completed. Please do **not** use the `touch` + command to prevent the cleaning policy from removing files, because + this behaviour would deprive the community of a shared resource. ### /home @@ -319,12 +319,12 @@ analyse datasets up to 1 PB in size. ### /nesi/nearline !!! prerequisite Note -The nearline service, including its associated file systems, is in an -Early Access phase, and allocations are by invitation. We appreciate -your patience as we develop, test and deploy this service. If you -would like to participate in the Early Access Programme, please -[contact our support -team](https://support.nesi.org.nz/hc/requests/new). + The nearline service, including its associated file systems, is in an + Early Access phase, and allocations are by invitation. We appreciate + your patience as we develop, test and deploy this service. If you + would like to participate in the Early Access Programme, please + [contact our support + team](https://support.nesi.org.nz/hc/requests/new). 
The `/nesi/nearline` filesystem is a data cache for the Hierarchical Storage Management System, which automatically manages the movement of diff --git a/docs/Storage/Nearline_long_term_storage/Nearline_Long_Term_Storage_Service.md b/docs/Storage/Nearline_long_term_storage/Nearline_Long_Term_Storage_Service.md index 21f07524e..e87a1d928 100644 --- a/docs/Storage/Nearline_long_term_storage/Nearline_Long_Term_Storage_Service.md +++ b/docs/Storage/Nearline_long_term_storage/Nearline_Long_Term_Storage_Service.md @@ -28,7 +28,7 @@ Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline Nearline - + ## Viewing files in nearline @@ -64,11 +64,11 @@ ________________________________________________________________________________ Status ("s" column of the `-s` output) legend: - migrated (**m**) - data of a specific Nearline file is on tape (does -not necessarily mean that the file is replicated across sites) + not necessarily mean that the file is replicated across sites) - pre-migrated (**p**) - data of a specific Nearline file is on both -the staging filesystem and the tape. + the staging filesystem and the tape. - resident (**r**) - data of a specific Nearline file is only on the -staging filesystem. + staging filesystem. **BUG WARNING:** The `-l` and `-s` flags may fail if the nearline directory has a large number of files. You will receive a long Python @@ -87,7 +87,7 @@ Optionally, you can run `nltraverse` with the `-s` command-line switch, which, as with `nlls`, will display the migration status of each file found. - + **BUG WARNING:** The `-s` flag may fail if a nearline directory has a large number of files. You will receive a long Python stack trace if @@ -126,24 +126,24 @@ nlput [ --nowait ] { | } The source directory or file list needs to be located under `/nesi/`**`project`**`/` or `/nesi/`**`nobackup`**`/` and specified as -such. +such. !!!
prerequisite Note -The following will not work: -``` sl -cd /nesi/project/nesi12345 -nlput nesi12345 some_directory -``` -It is necessary to do this instead: -``` sl -nlput nesi12345 /nesi/project/nesi12345/some_directory -``` + The following will not work: + ``` sl + cd /nesi/project/nesi12345 + nlput nesi12345 some_directory + ``` + It is necessary to do this instead: + ``` sl + nlput nesi12345 /nesi/project/nesi12345/some_directory + ``` The data will be mapped into the same directory structure under `/nesi/`**`nearline`**`/` (see below). !!! prerequisite Warning -Please ensure your file or directory names do not contain spaces, -non-standard characters or symbols. This may cause issues when -uploading or downloading files. + Please ensure your file or directory names do not contain spaces, + non-standard characters or symbols. This may cause issues when + uploading or downloading files. The recommended file size to archive is between 1 GB and 1 TB. The client **will not** accept any directory or file list containing any @@ -158,30 +158,30 @@ file list that does not satisfy all these criteria: - Every file must be readable and writable by its owner. - Every file must be readable and writable by its group. - The POSIX group of every file must be the project selected for -upload. + upload. If you are uploading a directory rather than the contents of a file list, the following additional permission restrictions apply: - Every subdirectory must be readable, writable and executable by its -owner. + owner. - Every subdirectory must be readable, writable and executable by its -group. + group. - The POSIX group of every subdirectory must be the project selected -for upload. + for upload. The existing directory structure starting after `/nesi/project//` or `/nesi/nobackup//` will be mapped onto `/nesi/nearline//` !!! prerequisite Warning -Files and directories are checked for existence and only new files are -transferred to Nearline. 
**Files already on Nearline will not be -updated to reflect newer source files**. Thus, files that already -exist on Nearline (either tape or staging disk) will be skipped in the -migration process, though you should receive a notification of this -If you wish to replace an existing file at a specific file path -(instead of creating a copy at a different file path) then the -original copy on Nearline must be purged. + Files and directories are checked for existence and only new files are + transferred to Nearline. **Files already on Nearline will not be + updated to reflect newer source files**. Thus, files that already + exist on Nearline (either tape or staging disk) will be skipped in the + migration process, though you should receive a notification of this. + If you wish to replace an existing file at a specific file path + (instead of creating a copy at a different file path) then the + original copy on Nearline must be purged. `nlput` takes only a directory or a file list. **A single file is treated as a file list** and read line by line, searching for valid file @@ -190,18 +190,18 @@ the full path of the file to be transferred. ### Put - directory !!! prerequisite Warning -If you try to upload to Nearline a path containing spaces, especially -multiple consecutive spaces, you will get some very unexpected -results, such as the job being dropped. We are aware of the issue and -may introduce a fix in a future release. In the meantime, we suggest -avoiding supplying such arguments to `nlput`. You can work around it -by renaming the directory and all its ancestors to avoid spaces, or by -putting the directory (or its ancestor whose name contains a space) -into an archive file. -This problem does not affect when your directory to upload happens to -have contents (files or directories) with spaces in their names, i.e. -to cause a problem the space must be in the name of the directory to -be uploaded or one of its ancestor directories.
+ If you try to upload to Nearline a path containing spaces, especially + multiple consecutive spaces, you will get some very unexpected + results, such as the job being dropped. We are aware of the issue and + may introduce a fix in a future release. In the meantime, we suggest + avoiding supplying such arguments to `nlput`. You can work around it + by renaming the directory and all its ancestors to avoid spaces, or by + putting the directory (or its ancestor whose name contains a space) + into an archive file. + This problem does not arise when your directory to upload happens to + have contents (files or directories) with spaces in their names, i.e. + to cause a problem the space must be in the name of the directory to + be uploaded or one of its ancestor directories. All files and subdirectories within a specified directory will be transferred into Nearline. The target location maps with the source @@ -215,19 +215,19 @@ will copy all data within the `Results` directory into `/nesi/nearline/nesi12345/`**`To/Archive/Results/`**. !!! prerequisite Warning -If you put `/nesi/`**`project`**`/nesi12345/To/Archive/Results/` on -Nearline as well as -`/nesi/`**`nobackup`**`/nesi12345/To/Archive/Results/`, the contents -of both source locations (`project` and `nobackup`) will be merged -into `/nesi/nearline/nesi12345/To/Archive/Results/`. Within -`/nesi/nearline/nesi12345/`, files with the same name and path will be -skipped. + If you put `/nesi/`**`project`**`/nesi12345/To/Archive/Results/` on + Nearline as well as + `/nesi/`**`nobackup`**`/nesi12345/To/Archive/Results/`, the contents + of both source locations (`project` and `nobackup`) will be merged + into `/nesi/nearline/nesi12345/To/Archive/Results/`. Within + `/nesi/nearline/nesi12345/`, files with the same name and path will be + skipped. ### Put - file list !!! prerequisite Warning -The file list must be located within `/nesi/project` or -`/nesi/nobackup`.
Any other location will cause obscure errors and -failures. + The file list must be located within `/nesi/project` or + `/nesi/nobackup`. Any other location will cause obscure errors and + failures. The `file_list` is a file containing a list of files to be transferred. It can specify **only one file per line** and **directories are @@ -240,48 +240,48 @@ The target location will again map with the source location, see above. As a good practice: - migrate only large files (SquashFS archives, tarballs, or files that -are individually large), or directories containing exclusively large -files. + are individually large), or directories containing exclusively large + files. - Do not try to modify a file in the source (nobackup or project) -directory once there is a copy of it on Nearline. + directory once there is a copy of it on Nearline. - Before deleting any data from your project or nobackup directory -that has been uploaded to Nearline, please consider whether you -require [verification of the -transfer](https://support.nesi.org.nz/hc/en-gb/articles/360001482516). -We recommend that you do at least a basic verification of all -transfers. + that has been uploaded to Nearline, please consider whether you + require [verification of the + transfer](https://support.nesi.org.nz/hc/en-gb/articles/360001482516). + We recommend that you do at least a basic verification of all + transfers. If you need to update data on the Nearline file system with a newer version of data from nobackup or project: 1. Compare the contents of the source directory -(on `/nesi/project` or `/nesi/nobackup`) and the target directory -(on `/nesi/nearline`). To look at one directory -on `/nesi/nearline` at a time, use `nlls`; if you need to compare a -large number of files across a range of directories, or for more -thorough verification (e.g. 
checksums), read [this -article](https://support.nesi.org.nz/hc/en-gb/articles/360001482516) -or [contact our support -team](https://support.nesi.org.nz/hc/requests/new). + (on `/nesi/project` or `/nesi/nobackup`) and the target directory + (on `/nesi/nearline`). To look at one directory + on `/nesi/nearline` at a time, use `nlls`; if you need to compare a + large number of files across a range of directories, or for more + thorough verification (e.g. checksums), read [this + article](https://support.nesi.org.nz/hc/en-gb/articles/360001482516) + or [contact our support + team](https://support.nesi.org.nz/hc/requests/new). 2. Once you know which files you need to update (i.e. only files whose -Nearline version is out of date), remove the old files on Nearline -using `nlpurge`. + Nearline version is out of date), remove the old files on Nearline + using `nlpurge`. 3. Copy the updated files to the Nearline file system using `nlput`. !!! prerequisite Warning -For technical reasons, files (data and metadata) and directory -structures on Nearline cannot be safely changed once present, even by -the system administrators, except by deletion and recreation. If you -wish to rename your files or restructure your directories, you must -follow the process below. + For technical reasons, files (data and metadata) and directory + structures on Nearline cannot be safely changed once present, even by + the system administrators, except by deletion and recreation. If you + wish to rename your files or restructure your directories, you must + follow the process below. If you need to edit data, rename files, or restructure directories that exist on Nearline but are no longer on project or nobackup: 1. Retrieve the files and directories you wish to change using the -`nlget` command (see below). + `nlget` command (see below). 2. Make the changes you wish to make. 3. Follow the instructions above for updating data on Nearline with a -new version of the data from project or nobackup. 
+ new version of the data from project or nobackup. ## Getting/Retrieving files from nearline @@ -297,19 +297,19 @@ Similar to `nlput` (see above), nlget accepts a Nearline **directory** or a **file list** `file_list`, defining the source of the data to be retrieved from Nearline. !!! prerequisite Warnings -- The local file list must be located within `/nesi/project` or -`/nesi/nobackup`. Any other location will be rejected. -- Paths to files or directories to be retrieved must be absolute and -start with `/nesi/nearline`, whether supplied on the command line -(as a directory) or as entries in a file list. -- Directories whose names contain spaces, especially multiple -consecutive spaces, cannot be retrieved from Nearline directly -using `nlget`. You must retrieve the contents of such a directory -using a filelist, or retrieve one of its ancestors that doesn't -have a space in the name or path. That is, instead of retrieving -`/nesi/project/nesi12345/ab/c  d` directly, retrieve -`/nesi/project/nesi12345/ab`. We are aware of the problem and may -address it in a later Nearline release. + - The local file list must be located within `/nesi/project` or + `/nesi/nobackup`. Any other location will be rejected. + - Paths to files or directories to be retrieved must be absolute and + start with `/nesi/nearline`, whether supplied on the command line + (as a directory) or as entries in a file list. + - Directories whose names contain spaces, especially multiple + consecutive spaces, cannot be retrieved from Nearline directly + using `nlget`. You must retrieve the contents of such a directory + using a filelist, or retrieve one of its ancestors that doesn't + have a space in the name or path. That is, instead of retrieving + `/nesi/project/nesi12345/ab/c  d` directly, retrieve + `/nesi/project/nesi12345/ab`. We are aware of the problem and may + address it in a later Nearline release. The destination `dest_dir` needs to be defined.
The whole directory structure after `/nesi/nearline/` will be created at the destination and @@ -325,11 +325,11 @@ directory structure does not already exist, and copy the data within the `Results` directory into it. Note that the output path will include the project root in the path. !!! prerequisite Warning -Any given file **will not be retrieved** if a file of the same name -already exists in the destination directory. If you wish to retrieve a -new copy of a file that already exists at the destination directory -then you must either change the destination directory, or delete the -existing copy of the file in the that directory. + Any given file **will not be retrieved** if a file of the same name + already exists in the destination directory. If you wish to retrieve a + new copy of a file that already exists in the destination directory + then you must either change the destination directory, or delete the + existing copy of the file in that directory. `nlget` takes only one directory or one file list. **Single files, if local, are treated as a file list** and read line by line, searching for @@ -356,16 +356,16 @@ is compulsory, and moreover all entries in the file list must denote files within (or supposed to be within) the chosen project's Nearline directory. !!! prerequisite Warnings -- If a file list is used, it must be located within `/nesi/project` -or `/nesi/nobackup` and referred to by its full path starting with -one of those places (symlinks in the path are OK). -- Paths to files or directories to be purged must be absolute and -start with `/nesi/nearline`, whether supplied on the command line -(as a directory) or as entries in a file list. -- Purging the entire Nearline directory for a project, e.g. -`nlpurge /nesi/nearline/nesi12345`, is not permitted. To empty a -project's Nearline directory, you must purge its contents one by -one (if directories), or by means of a filelist (if files). + - If a file list is used, it must be located within `/nesi/project` + or `/nesi/nobackup` and referred to by its full path starting with + one of those places (symlinks in the path are OK). + - Paths to files or directories to be purged must be absolute and + start with `/nesi/nearline`, whether supplied on the command line + (as a directory) or as entries in a file list. + - Purging the entire Nearline directory for a project, e.g. + `nlpurge /nesi/nearline/nesi12345`, is not permitted. To empty a + project's Nearline directory, you must purge its contents one by + one (if directories), or by means of a filelist (if files).
+ - If a file list is used, it must be located within `/nesi/project` + or `/nesi/nobackup` and referred to by its full path starting with + one of those places (symlinks in the path are OK). + - Paths to files or directories to be purged must be absolute and + start with `/nesi/nearline`, whether supplied on the command line + (as a directory) or as entries in a file list. + - Purging the entire Nearline directory for a project, e.g. + `nlpurge /nesi/nearline/nesi12345`, is not permitted. To empty a + project's Nearline directory, you must purge its contents one by + one (if directories), or by means of a filelist (if files). ## View nearline  job status @@ -470,30 +470,30 @@ indeed wait times could be hours or even in some cases more than a day. ## Known issues !!! prerequisite Retrievals -Some users of Nearline have reported that attempts to retrieve files -from tape using `nlget` (see below) will not retrieve all files. -Instead, only some files will come back, and the job will finish with -the following output: -``` sl -recall failed some syncs might still run (042) -``` -We are aware of this problem, which is caused by the Nearline job -timing out while waiting for a tape drive to become available. This -problem may also occur if you attempt to retrieve multiple files, -together adding to a large amount of data, from Nearline. -Unfortunately, a proper fix requires a fundamental redesign and -rebuild of the Nearline server architecture, work that is on hold -pending decisions regarding the direction in which we take NeSI's data -services. We appreciate your patience as we work through these -decisions. -In the meantime, if you encounter this problem, the recommended -workaround is to wait a couple of hours (or overnight, if at the end -of a day) and try again once a tape drive is more likely to be free. -You may have to try several times, waiting between each attempt. We -apologise for any inconvenience caused to you by tape drive -contention. 
- - + Some users of Nearline have reported that attempts to retrieve files + from tape using `nlget` (see below) will not retrieve all files. + Instead, only some files will come back, and the job will finish with + the following output: + ``` sl + recall failed some syncs might still run (042) + ``` + We are aware of this problem, which is caused by the Nearline job + timing out while waiting for a tape drive to become available. This + problem may also occur if you attempt to retrieve multiple files, + together adding to a large amount of data, from Nearline. + Unfortunately, a proper fix requires a fundamental redesign and + rebuild of the Nearline server architecture, work that is on hold + pending decisions regarding the direction in which we take NeSI's data + services. We appreciate your patience as we work through these + decisions. + In the meantime, if you encounter this problem, the recommended + workaround is to wait a couple of hours (or overnight, if at the end + of a day) and try again once a tape drive is more likely to be free. + You may have to try several times, waiting between each attempt. We + apologise for any inconvenience caused to you by tape drive + contention. + +  ## Support contact diff --git a/docs/Storage/Nearline_long_term_storage/Preparing_small_files_for_migration_to_Nearline_storage.md b/docs/Storage/Nearline_long_term_storage/Preparing_small_files_for_migration_to_Nearline_storage.md index 1100b4ea9..40eb8fbf6 100644 --- a/docs/Storage/Nearline_long_term_storage/Preparing_small_files_for_migration_to_Nearline_storage.md +++ b/docs/Storage/Nearline_long_term_storage/Preparing_small_files_for_migration_to_Nearline_storage.md @@ -39,23 +39,23 @@ large archive files, perhaps as few as one. Yes, you certainly can do that. This is unlikely to suit you, however: - Without special options, creating a SquashFS, tarball or other -archive file is effectively taking a copy of the contents of every -file in the directory. 
Unless your project or nobackup directory -starts out at less than half full, you may well not have the disk -space to create the full file. + archive file is effectively taking a copy of the contents of every + file in the directory. Unless your project or nobackup directory + starts out at less than half full, you may well not have the disk + space to create the full file. - There are options to some archiving programs, including -the `nn_archive_files`, `mksquashfs` and `tar` programs, that will -cause the software to delete files during or just after the -compression process. It is likely, however, that you will want at -least some files to remain in your online storage. + the `nn_archive_files`, `mksquashfs` and `tar` programs, that will + cause the software to delete files during or just after the + compression process. It is likely, however, that you will want at + least some files to remain in your online storage. - There are a few projects that have more than 500 TB of data, and -such an archive file would be too big to be copied to the staging -file system. Even if it were not, however, copying one very large -archive file takes a long time, retrieval takes a long time as well, -and since any interruption to either process will necessitate -starting from scratch, the risk of wasted time increases -(interruptions become more likely, and the likely consequences of -interruptions become more severe). + such an archive file would be too big to be copied to the staging + file system. Even if it were not, however, copying one very large + archive file takes a long time, retrieval takes a long time as well, + and since any interruption to either process will necessitate + starting from scratch, the risk of wasted time increases + (interruptions become more likely, and the likely consequences of + interruptions become more severe). ## What is the recommended option, then? 
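Before choosing between one big archive and several smaller ones, it helps to measure how much small-file data a directory actually holds. A sketch assuming GNU `find` (for `-printf`), using the 100 MB threshold that appears later in this file:

```sh
# Count files under 100 MB and total their size; run from the directory
# you are considering archiving.
find . -type f -size -100M | wc -l
find . -type f -size -100M -printf '%s\n' \
    | awk '{t += $1} END {printf "%.2f GiB\n", t / 2^30}'
```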
@@ -69,12 +69,12 @@ You do not have to create one single archive file for all small files in in fact you may prefer to create archive files pertaining to particular subdirectories. There is no harm in either approach. !!! prerequisite Tip -The archive creation process can take quite a long time. So that you -can freely log out of the cluster, and to protect the process in case -you're accidentally disconnected, you should create the archive by -means of a Slurm job, or else in a `tmux` or `screen` session. + The archive creation process can take quite a long time. So that you + can freely log out of the cluster, and to protect the process in case + you're accidentally disconnected, you should create the archive by + means of a Slurm job, or else in a `tmux` or `screen` session. -Archive creation is very simple, and can be achieved through the + Archive creation is very simple, and can be achieved through the following: ``` sl @@ -86,49 +86,50 @@ find . -type f -and -size -100M -print0 | xargs -0 -I {} nn_archive_files -p nes cd "${startdir}" ``` -Some notes on the above script: + Some notes on the above script: - The name of the archive is saved as a variable, `$archive_file`, so -that it is kept consistent whenever it is used. + that it is kept consistent whenever it is used. - While we have suggested creating the archive in situ -(`archive_file="archive.squash"`) as an example, there is no reason -not to use a relative or even absolute path -(e.g. `archive_file="/path/to/archive.squash"`). You can also put it -where you started running the sequence of commands from: -`archive_file="${startdir}/archive.squash"`. + (`archive_file="archive.squash"`) as an example, there is no reason + not to use a relative or even absolute path + (e.g. `archive_file="/path/to/archive.squash"`). You can also put it + where you started running the sequence of commands from: + `archive_file="${startdir}/archive.squash"`. 
- We recommend going to the directory (`cd `) before running the
-`find` command, so that the archive stores files as relative paths,
-not absolute paths. This choice will make a big difference when you
-come to extract the archive. In the example above, we go one step
-further: The && means, "Only run the next command if this command is
-successful, i.e. it completes with an exit code of 0."
+ `find` command, so that the archive stores files as relative paths,
+ not absolute paths. This choice will make a big difference when you
+ come to extract the archive. In the example above, we go one step
+ further: The `&&` means, "Only run the next command if this command is
+ successful, i.e. it completes with an exit code of 0."
- The `-type f` option restricts the search to look for files only.
-Directories, symbolic links and other items will not be found.
-However, files within subdirectories will be found.
+ Directories, symbolic links and other items will not be found.
+ However, files within subdirectories will be found.
- The `-size -100M` option restricts the search to items that are less
-than 100 MB. This size criterion is not the only valid option, but
-it likely represents a good balance between creating an overly large
-archive on the one hand, and leaving many small files to be
-individually copied on the other.
+ than 100 MB. This size criterion is not the only valid option, but
+ it likely represents a good balance between creating an overly large
+ archive on the one hand, and leaving many small files to be
+ individually copied on the other.
- The conjunction `-and` does exactly what you expect: it limits
-search results to items satisfying both criteria. (`find` also
-recognises the option `-or`, not relevant here.)
+ search results to items satisfying both criteria. (`find` also
+ recognises the option `-or`, not relevant here.)
- The option `-print0` separates results with the null character, so -that spaces and other special characters in file names don't get -misinterpreted as record separators. + that spaces and other special characters in file names don't get + misinterpreted as record separators. - Piping to `xargs -0` gracefully handles a long list of arguments -separated by null characters. `xargs` breaks up long lists of -arguments, sending the arguments in small batches to the simple -command given as an argument to `xargs`. In this case, that simple -command is `nn_archive_files` with flags and arguments. + separated by null characters. `xargs` breaks up long lists of + arguments, sending the arguments in small batches to the simple + command given as an argument to `xargs`. In this case, that simple + command is `nn_archive_files` with flags and arguments. - The option `-I {}` to `xargs` instructs `xargs` to replace every -later instance of `{}` with the name of the actual result, in this -case a found file, or more precisely a relative path to a found -file. + later instance of `{}` with the name of the actual result, in this + case a found file, or more precisely a relative path to a found + file. - The `--append` option causes the list of checksums to be appended -to, rather than overwritten. + to, rather than overwritten. - `--delete-files` will delete each found file once that file has been -added to the ever-growing archive. + added to the ever-growing archive. - As given above, the command will submit one, or a series of, Slurm -jobs. You can wait until they're done. + jobs. You can wait until they're done. 
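The value of pairing `-print0` with `xargs -0` can be seen in a small stand-alone sketch. This demo is not NeSI-specific (the directory and file names are invented for illustration): it shows how naive whitespace splitting miscounts file names containing spaces, while NUL separation keeps each name intact.

```sh
# Hypothetical demo: why the archive script uses find -print0 | xargs -0.
demo_dir=$(mktemp -d)
cd "$demo_dir"
touch "plain.txt" "with space.txt" "two  spaces.txt"

# Naive word-splitting: 3 file names are broken into 5 "words".
naive=$(find . -type f | xargs -n1 echo | wc -l)

# NUL separation: each name, spaces and all, is one argument.
safe=$(find . -type f -print0 | xargs -0 -n1 echo | wc -l)

echo "naive=$naive safe=$safe"
```

With the three files above, the naive pipeline reports five items while the NUL-separated one correctly reports three, which is exactly the failure mode the `-print0`/`-0` pairing prevents.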
+  \ No newline at end of file diff --git a/docs/Storage/Nearline_long_term_storage/Verifying_uploads_to_Nearline_storage.md b/docs/Storage/Nearline_long_term_storage/Verifying_uploads_to_Nearline_storage.md index 1409711ea..790fc6db2 100644 --- a/docs/Storage/Nearline_long_term_storage/Verifying_uploads_to_Nearline_storage.md +++ b/docs/Storage/Nearline_long_term_storage/Verifying_uploads_to_Nearline_storage.md @@ -26,17 +26,17 @@ the service to verify their data before deleting it from the project directory (persistent storage) or nobackup directory (temporary storage). !!! prerequisite Service Status -The verification options outlined below are intended to support the -Early Access phase of Nearline development. Verification options may -change as the Early Access Programme continues and as the Nearline -service moves into production. We will update our documentation to -reflect all such changes. -Your feedback on which verification options you think are necessary -will help us decide on future directions for the Nearline service. -Please [contact our support -team](https://support.nesi.org.nz/hc/requests/new) to request -verification or to offer suggestions regarding this or any other -aspect of our Nearline service. + The verification options outlined below are intended to support the + Early Access phase of Nearline development. Verification options may + change as the Early Access Programme continues and as the Nearline + service moves into production. We will update our documentation to + reflect all such changes. + Your feedback on which verification options you think are necessary + will help us decide on future directions for the Nearline service. + Please [contact our support + team](https://support.nesi.org.nz/hc/requests/new) to request + verification or to offer suggestions regarding this or any other + aspect of our Nearline service. There are several options for verification, depending on the level of assurance you require. 
@@ -49,11 +49,11 @@ of data to Nearline (i.e. `nlput` commands) report `job done successfully`, that gives you a basic level of confidence that the files were in fact copied over to nearline. !!! prerequisite Warning -The above check is reliable only if *all* `nlput` commands were -concerned solely with uploading new files to nearline. Because of the -way `nlput` is designed, a command trying to update files that already -existed on nearline will silently skip those files and still report -success. + The above check is reliable only if *all* `nlput` commands were + concerned solely with uploading new files to nearline. Because of the + way `nlput` is designed, a command trying to update files that already + existed on nearline will silently skip those files and still report + success. ## Level 2: File counts and sizes @@ -83,12 +83,12 @@ will be kept and you will be invited to compare the lists against each other, which you can do using a comparison program such as `diff` or `vimdiff`. !!! prerequisite Warning -The above check is useful only if the corresponding files in -`/nesi/project` and/or `/nesi/nobackup` have not been modified or -deleted, nor any new files added, since they were copied to nearline. -For this reason, if you want to carry out this level of checking, you -should do so as soon as possible after you have established that the -`nlput` operation completed successfully. + The above check is useful only if the corresponding files in + `/nesi/project` and/or `/nesi/nobackup` have not been modified or + deleted, nor any new files added, since they were copied to nearline. + For this reason, if you want to carry out this level of checking, you + should do so as soon as possible after you have established that the + `nlput` operation completed successfully. 
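As a generic illustration of this level of checking (the directory and file names below are invented for the demo, not real NeSI paths), the file count and total size of a directory tree can be produced with `find`, so that the same numbers can be compared between the source directory and what reached Nearline:

```sh
# Hypothetical sketch of a Level 2 style check: count files and total
# their sizes in bytes. Matching numbers on both sides is a necessary
# (though not sufficient) sign that everything was copied.
demo_dir=$(mktemp -d)
printf 'abc' > "$demo_dir/one.dat"        # 3 bytes
printf 'defgh' > "$demo_dir/two.dat"      # 5 bytes
mkdir "$demo_dir/sub"
printf 'ij' > "$demo_dir/sub/three.dat"   # 2 bytes

file_count=$(find "$demo_dir" -type f | wc -l)
total_bytes=$(find "$demo_dir" -type f -printf '%s\n' | awk '{s += $1} END {print s}')
echo "$file_count files, $total_bytes bytes"
```

Note that `-printf '%s\n'` is a GNU `find` extension; it is available on the NeSI clusters' Linux environment but not on all Unix systems.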
## Level 3: Checksums @@ -100,13 +100,13 @@ comparing the checksums to the corresponding original files in identical, it is virtually certain that the files contain the same data, even if their modification dates and times are reported differently. !!! prerequisite Warning -The above check is reliable only if the corresponding file in -`/nesi/project` and/or `/nesi/nobackup` has not been modified since it -was copied to nearline. For this reason, if you want to carry out this -level of checking, you should do so as soon as possible after you have -established that the `nlput` operation completed successfully and the -file has been migrated to tape. -Also, this check is very expensive, so you should not perform it on -large numbers of files or on files that collectively take up a lot of -disk space. Instead, please reserve this level of verification for -your most valuable research data. \ No newline at end of file + The above check is reliable only if the corresponding file in + `/nesi/project` and/or `/nesi/nobackup` has not been modified since it + was copied to nearline. For this reason, if you want to carry out this + level of checking, you should do so as soon as possible after you have + established that the `nlput` operation completed successfully and the + file has been migrated to tape. + Also, this check is very expensive, so you should not perform it on + large numbers of files or on files that collectively take up a lot of + disk space. Instead, please reserve this level of verification for + your most valuable research data. 
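As a generic sketch of the checksum round trip (file names are invented, and `sha256sum` is an assumption here: the text above does not specify which checksum tool the Nearline service uses), checksums recorded before upload can be re-verified against the retrieved copies:

```sh
# Hypothetical checksum round trip: record checksums before upload, then
# verify retrieved copies later. Any strong hash with a --check mode
# (md5sum, sha1sum, sha256sum, ...) works the same way.
demo_dir=$(mktemp -d)
cd "$demo_dir"
printf 'important results\n' > results.dat
printf 'model parameters\n' > params.dat

# Before upload: record one checksum line per file.
sha256sum results.dat params.dat > checksums.sha256

# After retrieval: re-hash the files and compare against the record.
# Prints "<name>: OK" per file and exits non-zero on any mismatch.
sha256sum --check checksums.sha256
```

The non-zero exit code on mismatch makes this easy to embed in a Slurm job or script that aborts before any originals are deleted.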
\ No newline at end of file diff --git a/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-14.md b/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-14.md index bb319b16d..27ce78cd5 100644 --- a/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-14.md +++ b/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-14.md @@ -27,20 +27,20 @@ zendesk_section_id: 360000502675 This release includes the following changes: - `nlls`, `nlget`, `nlpurge`, `nlput` and `nljobstatus` now come with -a debug mode, accessible via the `--debug` command line switch. + a debug mode, accessible via the `--debug` command line switch. - Help documentation, as well as the usage message when a nearline -command is run with incorrect arguments, has been improved. + command is run with incorrect arguments, has been improved. - `nljobstatus` now includes more comprehensive job status -information. In particular, the job status message now includes a -brief description of the stage the job is up to, and whether the job -is at that moment pending (waiting in a queue to start the next -operation), running, or complete. + information. In particular, the job status message now includes a + brief description of the stage the job is up to, and whether the job + is at that moment pending (waiting in a queue to start the next + operation), running, or complete. - The `nlls` command's `-ls` switch has been replaced with `-s`, -though `-ls` still works, being interpreted as equivalent to -`-l -s`. `nlls` also now comes with a `-b` switch, for reporting -individual sizes in bytes instead of in human-readable sizes. + though `-ls` still works, being interpreted as equivalent to + `-l -s`. `nlls` also now comes with a `-b` switch, for reporting + individual sizes in bytes instead of in human-readable sizes. 
- `nltraverse` has been improved, and now reports file sizes, and sums -of file sizes, in bytes, for greater accuracy and ease of comparison -with the output of `ls`. + of file sizes, in bytes, for greater accuracy and ease of comparison + with the output of `ls`. - There have been numerous other bug fixes to improve performance and -reduce the risk of unexpected failures and errors. \ No newline at end of file + reduce the risk of unexpected failures and errors. \ No newline at end of file diff --git a/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-21.md b/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-21.md index 168cb4104..19eb6147d 100644 --- a/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-21.md +++ b/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-21.md @@ -26,44 +26,44 @@ zendesk_section_id: 360000502675 This is a minor release incorporating bug fixes and improvements. - Certain server errors when a bad job is submitted now generate a -more informative error message in the client program than, "Internal -Server Error." + more informative error message in the client program than, "Internal + Server Error." - Nearline client programs now log to the `~/.librarian` directory, so -you no longer need to explicitly decorate the Nearline command with -complex strings in order to capture basic troubleshooting -information. + you no longer need to explicitly decorate the Nearline command with + complex strings in order to capture basic troubleshooting + information. - A bug causing `nlput` with a file list to fail if any entries in the -file list were missing from Nearline has been fixed. Now, `nlput` -will work even though the file is not already present on Nearline. + file list were missing from Nearline has been fixed. Now, `nlput` + will work even though the file is not already present on Nearline. 
- `nlput` no longer throws an exception if, when you are prompted for -a y/n response, you hit Enter thereby submitting an empty string. -Instead, it asks the same question again. + a y/n response, you hit Enter thereby submitting an empty string. + Instead, it asks the same question again. - If a local directory into which files are to be retrieved does not -exist, `nlget` will now carry out the retrieval. Previously, `nlget` -would create the directory but then abandon the retrieval. + exist, `nlget` will now carry out the retrieval. Previously, `nlget` + would create the directory but then abandon the retrieval. - We have clarified in help messages that `nlpurge` does not accept a -single file (on Nearline) as the file to be purged. The argument -that is not the project code must be either a directory on Nearline, -or a local file list. + single file (on Nearline) as the file to be purged. The argument + that is not the project code must be either a directory on Nearline, + or a local file list. - A bug has been fixed in the Nearline server whereby the server would -incorrectly calculate the changes to the project's disk space and -file count usage if an `nlpurge` command were to fail (or skip some -files) for any reason after it was accepted by the server. + incorrectly calculate the changes to the project's disk space and + file count usage if an `nlpurge` command were to fail (or skip some + files) for any reason after it was accepted by the server. - `nlpurge` can now be used to delete empty directories from Nearline, -provided the directory is given directly as an argument and not -included in a file list. + provided the directory is given directly as an argument and not + included in a file list. - `nlpurge` deals gracefully with the situation in which a directory -to be purged is not a subdirectory somewhere within the specified -project's Nearline directory, by printing an informative error -message. 
+ to be purged is not a subdirectory somewhere within the specified + project's Nearline directory, by printing an informative error + message. - `nlpurge` will no longer accept a file list argument if any of the -entries in the file list point to files (on Nearline) that are -outside the specified project's Nearline directory. Instead, an -error message will be displayed, listing all affected lines in the -file list. + entries in the file list point to files (on Nearline) that are + outside the specified project's Nearline directory. Instead, an + error message will be displayed, listing all affected lines in the + file list. - A bug that required users to start `nlpurge` file list entries with -`/scale_wlg_nearline/filesets/nearline/` has been fixed. Now, -entries must start with the more intuitive `/nesi/nearline/`. + `/scale_wlg_nearline/filesets/nearline/` has been fixed. Now, + entries must start with the more intuitive `/nesi/nearline/`. - A bug causing `nlls` (and commands depending on it, like -`nltraverse`) to fail if an empty directory is listed or included in -the traverse operation has been fixed. \ No newline at end of file + `nltraverse`) to fail if an empty directory is listed or included in + the traverse operation has been fixed. \ No newline at end of file diff --git a/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-22.md b/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-22.md index 216eb3e43..24efd4ab6 100644 --- a/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-22.md +++ b/docs/Storage/Release_Notes_Nearline/Long_Term_Storage_Nearline_release_notes_v1-1-0-22.md @@ -26,17 +26,17 @@ zendesk_section_id: 360000502675 This is a minor release incorporating bug fixes and improvements. 
- A bug causing the programs `nlls`, `nltraverse` and `nlcompare` to -misbehave when dealing with invisible files and directories (whose -names start with `.`), and other files and directories whose names -contain unorthodox characters such as spaces or other characters -having special meaning to the shell, has been fixed. + misbehave when dealing with invisible files and directories (whose + names start with `.`), and other files and directories whose names + contain unorthodox characters such as spaces or other characters + having special meaning to the shell, has been fixed. - A bug causing `nlls` to return `Internal Server Error` when the -operator specifies a subdirectory of a project directory that -doesn't exist on Nearline has been fixed. The error -`no such file or directory` is now returned instead. + operator specifies a subdirectory of a project directory that + doesn't exist on Nearline has been fixed. The error + `no such file or directory` is now returned instead. - Some small improvements have been made to server configuration -parsing and detection of inappropriate or missing configuration -values. + parsing and detection of inappropriate or missing configuration + values. During testing of this release, we found that attempts to run `nlput` or `nlget` using arguments containing spaces, especially multiple @@ -44,4 +44,4 @@ consecutive spaces, fail at the Nearline datamover stage while running `rsync`. This issue has been recorded and documented. For now, the recommended workaround is to rename such files or directories before uploading them to Nearline, or, alternatively, to store them in an -archive that does not contain spaces in its name. \ No newline at end of file +archive that does not contain spaces in its name.  
\ No newline at end of file diff --git a/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-18.md b/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-18.md index 23c87a8c2..bdbdbcd25 100644 --- a/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-18.md +++ b/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-18.md @@ -26,37 +26,37 @@ new features. In particular: - To run `nljobstatus` with a particular job ID, you no longer need -the `-j` switch before the job ID. `nljobstatus ` will -suffice. + the `-j` switch before the job ID. `nljobstatus ` will + suffice. - The `nlput` program will now check to see whether any of the files -requested for upload already exist on nearline. If it finds any of -them, it will ask you if you want to continue anyway, warning you -that the already existing files will not be altered or updated by -the nlput process. + requested for upload already exist on nearline. If it finds any of + them, it will ask you if you want to continue anyway, warning you + that the already existing files will not be altered or updated by + the nlput process. - The `nlput` program will also offer to create a filelist of already -existing files, in order to help you more conveniently delete them -from nearline if you wish to replace them with an updated version. -Users taking advantage of this feature are encouraged to review the -filelist after it has been generated, in case there are any files -included that you do not wish to delete. + existing files, in order to help you more conveniently delete them + from nearline if you wish to replace them with an updated version. + Users taking advantage of this feature are encouraged to review the + filelist after it has been generated, in case there are any files + included that you do not wish to delete. 
- `nlput`, `nlget` and `nlpurge` now verify that files and filelists -are in allowed locations, and (in the case of filelists) that the -individual filelist entries are in allowed locations: -- For `nlput`, all files to be uploaded must be within either -`/nesi/project` or `/nesi/nobackup`, whether they come from a -directory or are specified in a filelist -- For `nlget`, all files to be retrieved must be within -`/nesi/nearline`, and the destination must be within -`/nesi/project` or `/nesi/nobackup` -- For `nlpurge`, all files to be deleted must be within -`/nesi/nearline` -- For `nlput`, `nlget` and `nlpurge` with filelists, the filelist -must be within `/nesi/project` or `/nesi/nobackup` + are in allowed locations, and (in the case of filelists) that the + individual filelist entries are in allowed locations: + - For `nlput`, all files to be uploaded must be within either + `/nesi/project` or `/nesi/nobackup`, whether they come from a + directory or are specified in a filelist + - For `nlget`, all files to be retrieved must be within + `/nesi/nearline`, and the destination must be within + `/nesi/project` or `/nesi/nobackup` + - For `nlpurge`, all files to be deleted must be within + `/nesi/nearline` + - For `nlput`, `nlget` and `nlpurge` with filelists, the filelist + must be within `/nesi/project` or `/nesi/nobackup` - A bug causing projects to be locked indefinitely when `nlput` is -given a filelist as an argument has been fixed. + given a filelist as an argument has been fixed. - An attempt to remove a nonexistent directory from nearline using -`nlpurge` will no longer lock the project. + `nlpurge` will no longer lock the project. - Various bugs causing locks to persist on nearline projects even once -the locking process has ended have been fixed. Previously, many -error conditions causing nearline server tasks to end prematurely -would have left orphaned locks on involved projects. 
\ No newline at end of file + the locking process has ended have been fixed. Previously, many + error conditions causing nearline server tasks to end prematurely + would have left orphaned locks on involved projects. \ No newline at end of file diff --git a/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-19.md b/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-19.md index d41773b19..0a52fabd6 100644 --- a/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-19.md +++ b/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-19.md @@ -25,36 +25,36 @@ zendesk_section_id: 360000502675 This release includes a number of significant changes and new features: - The `nltraverse` command is now supported by an `nlcompare` command. -With `nlcompare`, you can compare a directory within `/nesi/project` -or `/nesi/nobackup` with a corresponding directory on -`/nesi/nearline`, and it will show any differences in file names, -sizes, ownerships, permissions and last modified timestamps. Please -note that `nlcompare` does not compare file contents. + With `nlcompare`, you can compare a directory within `/nesi/project` + or `/nesi/nobackup` with a corresponding directory on + `/nesi/nearline`, and it will show any differences in file names, + sizes, ownerships, permissions and last modified timestamps. Please + note that `nlcompare` does not compare file contents. - File size limits are now in place when running `nlput` (not -applicable to `nlget` or `nlpurge`): -- a minimum per-file size limit of 64 MB; -- a maximum per-file size limit of 1 TB. + applicable to `nlget` or `nlpurge`): + - a minimum per-file size limit of 64 MB; + - a maximum per-file size limit of 1 TB. - Permission restrictions are now in place when running `nlput` (not -applicable to `nlget` or `nlpurge`): -- You, as the operator, must be able to read every file selected -for upload. 
-- The group of every file must match the project code you choose. -If there is a mismatch, it may be that the project code has been -mistyped. -- The permissions of every file must be set so that both the -file's owner and the file's group are allowed to read and write -the file. -- Where a directory (as opposed to a filelist) is specified for -upload, that directory and every subdirectory therein must also -be readable and executable by the operator, belong to the -specified group, and be readable, writable and executable by the -file owner and group. + applicable to `nlget` or `nlpurge`): + - You, as the operator, must be able to read every file selected + for upload. + - The group of every file must match the project code you choose. + If there is a mismatch, it may be that the project code has been + mistyped. + - The permissions of every file must be set so that both the + file's owner and the file's group are allowed to read and write + the file. + - Where a directory (as opposed to a filelist) is specified for + upload, that directory and every subdirectory therein must also + be readable and executable by the operator, belong to the + specified group, and be readable, writable and executable by the + file owner and group. - Attempts to run `nlget` and `nlpurge` on files or directories not -present on nearline will now fail before the job is submitted to the -server, with a clear error message, instead of failing on the server -side, after a delay and with an obscure error message. + present on nearline will now fail before the job is submitted to the + server, with a clear error message, instead of failing on the server + side, after a delay and with an obscure error message. - Certain server errors that previously caused `KeyError` in the -client will now be reported as -`RuntimeError: Internal Server Error`. + client will now be reported as + `RuntimeError: Internal Server Error`. - Server-side logging and tracking with state files have been -improved. 
\ No newline at end of file + improved. \ No newline at end of file diff --git a/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-20.md b/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-20.md index 5c0a439cb..05d229251 100644 --- a/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-20.md +++ b/docs/Storage/Release_Notes_Nearline/Long_term_Storage_Nearline_release_notes_v1-1-0-20.md @@ -25,30 +25,30 @@ zendesk_section_id: 360000502675 This is a minor release incorporating bug fixes and improvements. - The `nlcompare` command will no longer call attention to differences -between files and directories that are solely due to the expected -difference at the start of the absolute path, i.e. the textual -difference between `/nesi/nearline/` and -`/nesi/project/` (or `/nesi/nobackup/`) -at the start of the path is ignored as irrelevant. `nlcompare` -continues to highlight the differences that might actually matter: -files present on nearline but missing from the project or nobackup -directory (or vice versa), files that have been renamed, and files -with different sizes or last modified times. + between files and directories that are solely due to the expected + difference at the start of the absolute path, i.e. the textual + difference between `/nesi/nearline/` and + `/nesi/project/` (or `/nesi/nobackup/`) + at the start of the path is ignored as irrelevant. `nlcompare` + continues to highlight the differences that might actually matter: + files present on nearline but missing from the project or nobackup + directory (or vice versa), files that have been renamed, and files + with different sizes or last modified times. 
- The `nlget` command now gives a prompt and informative error message -if you attempt to retrieve a single file from Nearline, instead of, -as previously, submitting the job to the server, which would, after -a wait that might well be lengthy depending on demand for the -service, respond with `pol_failed` or some other uninformative -error. + if you attempt to retrieve a single file from Nearline, instead of, + as previously, submitting the job to the server, which would, after + a wait that might well be lengthy depending on demand for the + service, respond with `pol_failed` or some other uninformative + error. - The `nlls` command now gives a prompt and meaningful error message -if run on a single file with the `-s` command-line switch, instead -of, as previously, returning no results. + if run on a single file with the `-s` command-line switch, instead + of, as previously, returning no results. - The in-program usage message for the `nlpurge` command, which is -printed when the wrong number or type of arguments is supplied, has -been improved. + printed when the wrong number or type of arguments is supplied, has + been improved. - For ease of scripting, client or server errors that occur while the -client program is running and you are requesting a nearline -operation will, in almost all cases, cause the nearline client -program to exit with a non-zero exit code. Note that this is not, -and can not be, the case where the error first occurs after the job -has been accepted by the server for processing. \ No newline at end of file + client program is running and you are requesting a nearline + operation will, in almost all cases, cause the nearline client + program to exit with a non-zero exit code. Note that this is not, + and can not be, the case where the error first occurs after the job + has been accepted by the server for processing. 
\ No newline at end of file diff --git a/docs/assets/images/OpenFOAM_0.png b/docs/assets/images/OpenFOAM_0.png index eeb1fea2b..461ffd7c6 100644 --- a/docs/assets/images/OpenFOAM_0.png +++ b/docs/assets/images/OpenFOAM_0.png @@ -1,2 +1,2 @@ -AccessDeniedAccess DeniedTPCXYK6GKQV0GHT01WPvinWntmzzLwrRl0CnytydLyv9dwV32JLXa5VeF1GiNPZ+uxi5cEgEM8mu/LwM0o2vPa/CeCQ= \ No newline at end of file +AccessDeniedAccess DeniedKZ298FRRDDES620NhuegenQ4s2hNEX8+gcgqCRXyRmBZvBCGEhj0+2G0oEZnIaQrlTWToH77AqpPnrVad0Eepm0pLWM= \ No newline at end of file